00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2438
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3703
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.000 Started by timer
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.569 The recommended git tool is: git
00:00:00.569 using credential 00000000-0000-0000-0000-000000000002
00:00:00.571 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.599 Fetching changes from the remote Git repository
00:00:00.604 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.618 Using shallow fetch with depth 1
00:00:00.618 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.618 > git --version # timeout=10
00:00:00.628 > git --version # 'git version 2.39.2'
00:00:00.628 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.640 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.640 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.183 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.198 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.212 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.212 > git config core.sparsecheckout # timeout=10
00:00:07.225 > git read-tree -mu HEAD # timeout=10
00:00:07.263 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.292 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.293 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.396 [Pipeline] Start of Pipeline
00:00:07.410 [Pipeline] library
00:00:07.411 Loading library shm_lib@master
00:00:07.411 Library shm_lib@master is cached. Copying from home.
00:00:07.426 [Pipeline] node
00:00:52.681 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest
00:00:52.683 [Pipeline] {
00:00:52.693 [Pipeline] catchError
00:00:52.694 [Pipeline] {
00:00:52.707 [Pipeline] wrap
00:00:52.714 [Pipeline] {
00:00:52.723 [Pipeline] stage
00:00:52.725 [Pipeline] { (Prologue)
00:00:52.746 [Pipeline] echo
00:00:52.748 Node: VM-host-SM0
00:00:52.755 [Pipeline] cleanWs
00:00:52.764 [WS-CLEANUP] Deleting project workspace...
00:00:52.764 [WS-CLEANUP] Deferred wipeout is used...
00:00:52.771 [WS-CLEANUP] done
00:00:52.958 [Pipeline] setCustomBuildProperty
00:00:53.060 [Pipeline] httpRequest
00:00:53.484 [Pipeline] echo
00:00:53.485 Sorcerer 10.211.164.101 is alive
00:00:53.496 [Pipeline] retry
00:00:53.498 [Pipeline] {
00:00:53.512 [Pipeline] httpRequest
00:00:53.517 HttpMethod: GET
00:00:53.517 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:53.518 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:53.524 Response Code: HTTP/1.1 200 OK
00:00:53.524 Success: Status code 200 is in the accepted range: 200,404
00:00:53.525 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:00.388 [Pipeline] }
00:01:00.406 [Pipeline] // retry
00:01:00.414 [Pipeline] sh
00:01:00.696 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:01:00.712 [Pipeline] httpRequest
00:01:01.178 [Pipeline] echo
00:01:01.179 Sorcerer 10.211.164.101 is alive
00:01:01.188 [Pipeline] retry
00:01:01.190 [Pipeline] {
00:01:01.203 [Pipeline] httpRequest
00:01:01.207 HttpMethod: GET
00:01:01.208 URL: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:01:01.209 Sending request to url: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:01:01.213 Response Code: HTTP/1.1 200 OK
00:01:01.214 Success: Status code 200 is in the accepted range: 200,404
00:01:01.214 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:03:33.755 [Pipeline] }
00:03:33.772 [Pipeline] // retry
00:03:33.780 [Pipeline] sh
00:03:34.059 + tar --no-same-owner -xf spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:03:38.270 [Pipeline] sh
00:03:38.549 + git -C spdk log --oneline -n5
00:03:38.549 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:03:38.549 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:03:38.549 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:03:38.549 e2dfdf06c accel/mlx5: Register post_poller handler
00:03:38.549 3c8001115 accel/mlx5: More precise condition to update DB
00:03:38.568 [Pipeline] withCredentials
00:03:38.580 > git --version # timeout=10
00:03:38.595 > git --version # 'git version 2.39.2'
00:03:38.610 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:03:38.613 [Pipeline] {
00:03:38.623 [Pipeline] retry
00:03:38.625 [Pipeline] {
00:03:38.640 [Pipeline] sh
00:03:38.918 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:03:38.929 [Pipeline] }
00:03:38.947 [Pipeline] // retry
00:03:38.953 [Pipeline] }
00:03:38.968 [Pipeline] // withCredentials
00:03:38.978 [Pipeline] httpRequest
00:03:39.382 [Pipeline] echo
00:03:39.384 Sorcerer 10.211.164.101 is alive
00:03:39.394 [Pipeline] retry
00:03:39.396 [Pipeline] {
00:03:39.411 [Pipeline] httpRequest
00:03:39.417 HttpMethod: GET
00:03:39.418 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:39.418 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:39.419 Response Code: HTTP/1.1 200 OK
00:03:39.420 Success: Status code 200 is in the accepted range: 200,404
00:03:39.421 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:40.739 [Pipeline] }
00:03:40.756 [Pipeline] // retry
00:03:40.763 [Pipeline] sh
00:03:41.044 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:42.980 [Pipeline] sh
00:03:43.265 + git -C dpdk log --oneline -n5
00:03:43.266 caf0f5d395 version: 22.11.4
00:03:43.266 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:43.266 dc9c799c7d vhost: fix missing spinlock unlock
00:03:43.266 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:43.266 6ef77f2a5e net/gve: fix RX buffer size alignment
00:03:43.282 [Pipeline] writeFile
00:03:43.297 [Pipeline] sh
00:03:43.577 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:43.586 [Pipeline] sh
00:03:43.864 + cat autorun-spdk.conf
00:03:43.864 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:43.864 SPDK_TEST_NVME=1
00:03:43.864 SPDK_TEST_FTL=1
00:03:43.864 SPDK_TEST_ISAL=1
00:03:43.864 SPDK_RUN_ASAN=1
00:03:43.864 SPDK_RUN_UBSAN=1
00:03:43.864 SPDK_TEST_XNVME=1
00:03:43.864 SPDK_TEST_NVME_FDP=1
00:03:43.864 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:43.864 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:43.864 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:43.871 RUN_NIGHTLY=1
00:03:43.873 [Pipeline] }
00:03:43.887 [Pipeline] // stage
00:03:43.906 [Pipeline] stage
00:03:43.909 [Pipeline] { (Run VM)
00:03:43.925 [Pipeline] sh
00:03:44.212 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:44.212 + echo 'Start stage prepare_nvme.sh'
00:03:44.212 Start stage prepare_nvme.sh
00:03:44.213 + [[ -n 5 ]]
00:03:44.213 + disk_prefix=ex5
00:03:44.213 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:03:44.213 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:03:44.213 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:03:44.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:44.213 ++ SPDK_TEST_NVME=1
00:03:44.213 ++ SPDK_TEST_FTL=1
00:03:44.213 ++ SPDK_TEST_ISAL=1
00:03:44.213 ++ SPDK_RUN_ASAN=1
00:03:44.213 ++ SPDK_RUN_UBSAN=1
00:03:44.213 ++ SPDK_TEST_XNVME=1
00:03:44.213 ++ SPDK_TEST_NVME_FDP=1
00:03:44.213 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:44.213 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:44.213 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:44.213 ++ RUN_NIGHTLY=1
00:03:44.213 + cd /var/jenkins/workspace/nvme-vg-autotest
00:03:44.213 + nvme_files=()
00:03:44.213 + declare -A nvme_files
00:03:44.213 + backend_dir=/var/lib/libvirt/images/backends
00:03:44.213 + nvme_files['nvme.img']=5G
00:03:44.213 + nvme_files['nvme-cmb.img']=5G
00:03:44.213 + nvme_files['nvme-multi0.img']=4G
00:03:44.213 + nvme_files['nvme-multi1.img']=4G
00:03:44.213 + nvme_files['nvme-multi2.img']=4G
00:03:44.213 + nvme_files['nvme-openstack.img']=8G
00:03:44.213 + nvme_files['nvme-zns.img']=5G
00:03:44.213 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:44.213 + (( SPDK_TEST_FTL == 1 ))
00:03:44.213 + nvme_files["nvme-ftl.img"]=6G
00:03:44.213 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:44.213 + nvme_files["nvme-fdp.img"]=1G
00:03:44.213 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:03:44.213 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:44.213 + for nvme in "${!nvme_files[@]}"
00:03:44.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:03:44.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:44.472 + for nvme in "${!nvme_files[@]}"
00:03:44.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:03:44.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:44.472 + for nvme in "${!nvme_files[@]}"
00:03:44.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:03:44.472 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:44.472 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:03:44.472 + echo 'End stage prepare_nvme.sh'
00:03:44.472 End stage prepare_nvme.sh
00:03:44.483 [Pipeline] sh
00:03:44.764 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:44.764 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:44.764
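[Editorial sketch] The prepare_nvme.sh trace above drives everything from the nvme_files associative array (image name -> size); the FTL and FDP entries are appended only because SPDK_TEST_FTL=1 and SPDK_TEST_NVME_FDP=1 are set. The "Formatting ... fmt=raw ... preallocation=falloc" lines are characteristic qemu-img output, but create_nvme_img.sh's internals are not shown in this log, so the following is only a minimal sketch of an equivalent loop, assuming qemu-img is available:

    #!/usr/bin/env bash
    # Sketch only: create_nvme_img.sh's real implementation is not part of this log.
    set -euo pipefail
    disk_prefix=ex5
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        ['nvme.img']=5G ['nvme-cmb.img']=5G ['nvme-multi0.img']=4G
        ['nvme-multi1.img']=4G ['nvme-multi2.img']=4G ['nvme-openstack.img']=8G
        ['nvme-zns.img']=5G ['nvme-ftl.img']=6G ['nvme-fdp.img']=1G
    )
    mkdir -p "$backend_dir"
    for nvme in "${!nvme_files[@]}"; do
        # Raw image, preallocated with fallocate(), matching the log's
        # "fmt=raw size=... preallocation=falloc" lines.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
    done

The per-image sizes are taken directly from the trace; error handling and the sudo plumbing are omitted.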
00:03:44.764 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:03:44.764 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:03:44.764 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:03:44.764 HELP=0
00:03:44.764 DRY_RUN=0
00:03:44.764 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:03:44.764 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:44.764 NVME_AUTO_CREATE=0
00:03:44.764 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:03:44.764 NVME_CMB=,,,,
00:03:44.764 NVME_PMR=,,,,
00:03:44.764 NVME_ZNS=,,,,
00:03:44.764 NVME_MS=true,,,,
00:03:44.764 NVME_FDP=,,,on,
00:03:44.764 SPDK_VAGRANT_DISTRO=fedora39
00:03:44.764 SPDK_VAGRANT_VMCPU=10
00:03:44.764 SPDK_VAGRANT_VMRAM=12288
00:03:44.764 SPDK_VAGRANT_PROVIDER=libvirt
00:03:44.764 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:44.764 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:44.764 SPDK_OPENSTACK_NETWORK=0
00:03:44.764 VAGRANT_PACKAGE_BOX=0
00:03:44.764 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:44.764 FORCE_DISTRO=true
00:03:44.764 VAGRANT_BOX_VERSION=
00:03:44.764 EXTRA_VAGRANTFILES=
00:03:44.764 NIC_MODEL=e1000
00:03:44.765
00:03:44.765 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:03:44.765 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:03:48.976 Bringing machine 'default' up with 'libvirt' provider...
00:03:49.234 ==> default: Creating image (snapshot of base box volume).
00:03:49.234 ==> default: Creating domain with the following settings...
00:03:49.234 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733499037_8ae290b84f87a33dca05
00:03:49.234 ==> default: -- Domain type: kvm
00:03:49.234 ==> default: -- Cpus: 10
00:03:49.234 ==> default: -- Feature: acpi
00:03:49.234 ==> default: -- Feature: apic
00:03:49.234 ==> default: -- Feature: pae
00:03:49.234 ==> default: -- Memory: 12288M
00:03:49.234 ==> default: -- Memory Backing: hugepages:
00:03:49.234 ==> default: -- Management MAC:
00:03:49.234 ==> default: -- Loader:
00:03:49.234 ==> default: -- Nvram:
00:03:49.234 ==> default: -- Base box: spdk/fedora39
00:03:49.234 ==> default: -- Storage pool: default
00:03:49.234 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733499037_8ae290b84f87a33dca05.img (20G)
00:03:49.234 ==> default: -- Volume Cache: default
00:03:49.234 ==> default: -- Kernel:
00:03:49.234 ==> default: -- Initrd:
00:03:49.234 ==> default: -- Graphics Type: vnc
00:03:49.234 ==> default: -- Graphics Port: -1
00:03:49.234 ==> default: -- Graphics IP: 127.0.0.1
00:03:49.234 ==> default: -- Graphics Password: Not defined
00:03:49.234 ==> default: -- Video Type: cirrus
00:03:49.234 ==> default: -- Video VRAM: 9216
00:03:49.234 ==> default: -- Sound Type:
00:03:49.234 ==> default: -- Keymap: en-us
00:03:49.234 ==> default: -- TPM Path:
00:03:49.234 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:49.234 ==> default: -- Command line args:
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:49.234 ==> default: -> value=-drive,
00:03:49.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:49.234 ==> default: -> value=-device,
00:03:49.234 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:49.492 ==> default: Creating shared folders metadata...
00:03:49.492 ==> default: Starting domain.
00:03:52.024 ==> default: Waiting for domain to get an IP address...
00:04:06.899 ==> default: Waiting for SSH to become available...
00:04:08.794 ==> default: Configuring and enabling network interfaces...
00:04:12.980 default: SSH address: 192.168.121.2:22
00:04:12.980 default: SSH username: vagrant
00:04:12.980 default: SSH auth method: private key
00:04:15.507 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:23.609 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:04:28.874 ==> default: Mounting SSHFS shared folder...
00:04:30.776 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:30.776 ==> default: Checking Mount..
00:04:32.192 ==> default: Folder Successfully Mounted!
00:04:32.192 ==> default: Running provisioner: file...
00:04:33.130 default: ~/.gitconfig => .gitconfig
00:04:33.388
00:04:33.388 SUCCESS!
00:04:33.388
00:04:33.388 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:33.388 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:33.388 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:33.388
00:04:33.397 [Pipeline] }
00:04:33.412 [Pipeline] // stage
00:04:33.421 [Pipeline] dir
00:04:33.422 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:04:33.424 [Pipeline] {
00:04:33.437 [Pipeline] catchError
00:04:33.438 [Pipeline] {
00:04:33.451 [Pipeline] sh
00:04:33.729 + vagrant ssh-config --host vagrant
00:04:33.729 + sed -ne /^Host/,$p
00:04:33.729 + tee ssh_conf
00:04:37.915 Host vagrant
00:04:37.915 HostName 192.168.121.2
00:04:37.915 User vagrant
00:04:37.915 Port 22
00:04:37.915 UserKnownHostsFile /dev/null
00:04:37.915 StrictHostKeyChecking no
00:04:37.915 PasswordAuthentication no
00:04:37.915 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:37.915 IdentitiesOnly yes
00:04:37.915 LogLevel FATAL
00:04:37.915 ForwardAgent yes
00:04:37.915 ForwardX11 yes
00:04:37.915
00:04:37.928 [Pipeline] withEnv
00:04:37.930 [Pipeline] {
00:04:37.944 [Pipeline] sh
00:04:38.220 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:38.220 source /etc/os-release
00:04:38.220 [[ -e /image.version ]] && img=$(< /image.version)
00:04:38.220 # Minimal, systemd-like check.
00:04:38.220 if [[ -e /.dockerenv ]]; then
00:04:38.220 # Clear garbage from the node's name:
00:04:38.220 # agt-er_autotest_547-896 -> autotest_547-896
00:04:38.220 # $HOSTNAME is the actual container id
00:04:38.220 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:38.220 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:38.220 # We can assume this is a mount from a host where container is running,
00:04:38.220 # so fetch its hostname to easily identify the target swarm worker.
00:04:38.220 container="$(< /etc/hostname) ($agent)"
00:04:38.220 else
00:04:38.220 # Fallback
00:04:38.220 container=$agent
00:04:38.220 fi
00:04:38.220 fi
00:04:38.220 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:38.220
00:04:38.488 [Pipeline] }
00:04:38.503 [Pipeline] // withEnv
00:04:38.511 [Pipeline] setCustomBuildProperty
00:04:38.526 [Pipeline] stage
00:04:38.528 [Pipeline] { (Tests)
00:04:38.544 [Pipeline] sh
00:04:38.820 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:38.834 [Pipeline] sh
00:04:39.111 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:39.383 [Pipeline] timeout
00:04:39.383 Timeout set to expire in 50 min
00:04:39.385 [Pipeline] {
00:04:39.398 [Pipeline] sh
00:04:39.673 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:40.239 HEAD is now at a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:04:40.250 [Pipeline] sh
00:04:40.526 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:40.796 [Pipeline] sh
00:04:41.073 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:41.375 [Pipeline] sh
00:04:41.650 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:41.650 ++ readlink -f spdk_repo
00:04:41.650 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:41.650 + [[ -n /home/vagrant/spdk_repo ]]
00:04:41.650 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:41.650 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:41.650 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:41.650 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:41.650 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:41.650 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:41.650 + cd /home/vagrant/spdk_repo
00:04:41.650 + source /etc/os-release
00:04:41.650 ++ NAME='Fedora Linux'
00:04:41.650 ++ VERSION='39 (Cloud Edition)'
00:04:41.650 ++ ID=fedora
00:04:41.650 ++ VERSION_ID=39
00:04:41.650 ++ VERSION_CODENAME=
00:04:41.650 ++ PLATFORM_ID=platform:f39
00:04:41.650 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:41.650 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:41.650 ++ LOGO=fedora-logo-icon
00:04:41.650 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:41.650 ++ HOME_URL=https://fedoraproject.org/
00:04:41.650 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:41.650 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:41.650 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:41.650 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:41.650 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:41.650 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:41.650 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:41.650 ++ SUPPORT_END=2024-11-12
00:04:41.650 ++ VARIANT='Cloud Edition'
00:04:41.650 ++ VARIANT_ID=cloud
00:04:41.650 + uname -a
00:04:41.650 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:41.650 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:42.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:42.473 Hugepages
00:04:42.473 node hugesize free / total
00:04:42.473 node0 1048576kB 0 / 0
00:04:42.473 node0 2048kB 0 / 0
00:04:42.473
00:04:42.473 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:42.473 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:42.473 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:42.473 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:42.473 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:04:42.473 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:04:42.473 + rm -f /tmp/spdk-ld-path
00:04:42.473 + source autorun-spdk.conf
00:04:42.473 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:42.473 ++ SPDK_TEST_NVME=1
00:04:42.473 ++ SPDK_TEST_FTL=1
00:04:42.473 ++ SPDK_TEST_ISAL=1
00:04:42.473 ++ SPDK_RUN_ASAN=1
00:04:42.473 ++ SPDK_RUN_UBSAN=1
00:04:42.473 ++ SPDK_TEST_XNVME=1
00:04:42.473 ++ SPDK_TEST_NVME_FDP=1
00:04:42.473 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:04:42.473 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:04:42.473 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:42.473 ++ RUN_NIGHTLY=1
00:04:42.473 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:42.473 + [[ -n '' ]]
00:04:42.473 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:42.473 + for M in /var/spdk/build-*-manifest.txt
00:04:42.473 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:42.473 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.474 + for M in /var/spdk/build-*-manifest.txt
00:04:42.474 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:42.474 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.474 + for M in /var/spdk/build-*-manifest.txt
00:04:42.474 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:42.474 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:42.474 ++ uname
00:04:42.474 + [[ Linux == \L\i\n\u\x ]]
00:04:42.474 + sudo dmesg -T
00:04:42.733 + sudo dmesg --clear
00:04:42.733 + dmesg_pid=6038
00:04:42.733 + sudo dmesg -Tw
00:04:42.733 + [[ Fedora Linux == FreeBSD ]]
00:04:42.733 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:42.733 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:42.733 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:42.733 + [[ -x /usr/src/fio-static/fio ]]
00:04:42.733 + export FIO_BIN=/usr/src/fio-static/fio
00:04:42.733 + FIO_BIN=/usr/src/fio-static/fio
00:04:42.733 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:42.733 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:42.733 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:42.733 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:42.733 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:42.733 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:42.733 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:42.733 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:42.733 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.733 15:31:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:42.733 15:31:31 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:42.733 15:31:31 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:04:42.733 15:31:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:42.733 15:31:31 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:42.733 15:31:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:04:42.733 15:31:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:42.733 15:31:31 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:42.733 15:31:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:42.733 15:31:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:42.733 15:31:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:42.733 15:31:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.733 15:31:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.733 15:31:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.733 15:31:31 -- paths/export.sh@5 -- $ export PATH
00:04:42.733 15:31:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:42.733 15:31:31 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:42.733 15:31:31 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:42.733 15:31:31 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733499091.XXXXXX
00:04:42.733 15:31:31 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733499091.5Q0SQe
00:04:42.733 15:31:31 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:42.733 15:31:31 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:04:42.733 15:31:31 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:04:42.733 15:31:31 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:04:42.733 15:31:31 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:42.733 15:31:31 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:42.733 15:31:31 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:42.733 15:31:31 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:42.733 15:31:31 -- common/autotest_common.sh@10 -- $ set +x
00:04:42.733 15:31:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:04:42.733 15:31:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:42.733 15:31:31 -- pm/common@17 -- $ local monitor
00:04:42.733 15:31:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:42.733 15:31:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:42.733 15:31:31 -- pm/common@25 -- $ sleep 1
00:04:42.733 15:31:31 -- pm/common@21 -- $ date +%s
00:04:42.733 15:31:31 -- pm/common@21 -- $ date +%s
00:04:42.733 15:31:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499091
00:04:42.733 15:31:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733499091
00:04:42.733 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499091_collect-vmstat.pm.log
00:04:42.733 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733499091_collect-cpu-load.pm.log
00:04:44.106 15:31:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:44.106 15:31:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:44.106 15:31:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:44.106 15:31:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:44.106 15:31:32 -- spdk/autobuild.sh@16 -- $ date -u
00:04:44.106 Fri Dec 6 03:31:32 PM UTC 2024
00:04:44.106 15:31:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:44.106 v25.01-pre-303-ga5e6ecf28
00:04:44.106 15:31:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:44.106 15:31:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:44.106 15:31:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:44.106 15:31:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:44.106 15:31:32 -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106 ************************************
00:04:44.106 START TEST asan
00:04:44.106 ************************************
00:04:44.106 using asan
00:04:44.106 15:31:32 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:44.106
00:04:44.106 real 0m0.000s
00:04:44.106 user 0m0.000s
00:04:44.106 sys 0m0.000s
00:04:44.106 15:31:32 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:44.106 ************************************
00:04:44.106 END TEST asan
00:04:44.106 ************************************
00:04:44.106 15:31:32 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106 15:31:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:44.106 15:31:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:44.106 15:31:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:44.106 15:31:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:44.106 15:31:32 -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106 ************************************
00:04:44.106 START TEST ubsan
00:04:44.106 ************************************
00:04:44.106 using ubsan
00:04:44.106 15:31:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:44.106
00:04:44.106 real 0m0.000s
00:04:44.106 user 0m0.000s
00:04:44.106 sys 0m0.000s
00:04:44.106 15:31:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:44.106 15:31:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106 ************************************
00:04:44.106 END TEST ubsan
00:04:44.106 ************************************
00:04:44.106 15:31:32 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:04:44.106 15:31:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:04:44.106 15:31:32 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:04:44.106 15:31:32 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:04:44.106 15:31:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:44.106 15:31:32 -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106 ************************************
00:04:44.106 START TEST build_native_dpdk
00:04:44.106 ************************************
00:04:44.106 15:31:32 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:04:44.106 caf0f5d395 version: 22.11.4
00:04:44.106 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:04:44.106 dc9c799c7d vhost: fix missing spinlock unlock
00:04:44.106 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:04:44.106 6ef77f2a5e net/gve: fix RX buffer size alignment
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:04:44.106 patching file config/rte_config.h
00:04:44.106 Hunk #1 succeeded at 60 (offset 1 line).
00:04:44.106 15:31:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:04:44.106 15:31:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:04:44.107 patching file lib/pcapng/rte_pcapng.c
00:04:44.107 Hunk #1 succeeded at 110 (offset -18 lines).
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:04:44.107 15:31:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:04:44.107 15:31:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:04:50.688 The Meson build system
00:04:50.688 Version: 1.5.0
00:04:50.688 Source dir: /home/vagrant/spdk_repo/dpdk
00:04:50.688 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:04:50.688 Build type: native build
00:04:50.688 Program cat found: YES (/usr/bin/cat)
00:04:50.688 Project name: DPDK
00:04:50.688 Project version: 22.11.4
00:04:50.688 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:50.688 C linker for the host machine: gcc ld.bfd 2.40-14
00:04:50.688 Host machine cpu family: x86_64
00:04:50.688 Host machine cpu: x86_64
00:04:50.688 Message: ## Building in Developer Mode ##
00:04:50.688 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:50.688 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:04:50.688 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:04:50.688 Program objdump found: YES (/usr/bin/objdump)
00:04:50.688 Program python3 found: YES (/usr/bin/python3)
00:04:50.688 Program cat found: YES (/usr/bin/cat)
00:04:50.688 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
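[Editorial sketch] The xtrace block above shows scripts/common.sh deciding which DPDK compatibility patches apply: lt and ge split each version string on the characters '.', '-' and ':' and compare the fields numerically, left to right. A condensed Bash sketch of that comparison (simplified; the real helper also normalizes each field through its decimal function before comparing) looks like:

    # Condensed sketch of the cmp_versions logic traced above.
    cmp_versions() {    # usage: cmp_versions 22.11.4 '<' 24.07.0
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Unset fields evaluate to 0 in arithmetic context, so 22.11 == 22.11.0.
            if ((ver1[v] > ver2[v])); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((ver1[v] < ver2[v])); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == *=* ]]    # all fields equal: true only for <=, >= and ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    ge() { cmp_versions "$1" '>=' "$2"; }

That matches the three decisions in the trace: lt 22.11.4 21.11.0 returns 1 and the config/rte_config.h patch is applied, lt 22.11.4 24.07.0 returns 0 and the lib/pcapng/rte_pcapng.c patch is applied, and ge 22.11.4 24.07.0 returns 1 before the script sets dpdk_kmods=false and invokes meson below.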
00:04:50.688 Checking for size of "void *" : 8
00:04:50.688 Checking for size of "void *" : 8 (cached)
00:04:50.688 Library m found: YES
00:04:50.688 Library numa found: YES
00:04:50.688 Has header "numaif.h" : YES
00:04:50.688 Library fdt found: NO
00:04:50.688 Library execinfo found: NO
00:04:50.688 Has header "execinfo.h" : YES
00:04:50.688 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:50.688 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:50.688 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:50.688 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:50.688 Run-time dependency openssl found: YES 3.1.1
00:04:50.688 Run-time dependency libpcap found: YES 1.10.4
00:04:50.688 Has header "pcap.h" with dependency libpcap: YES
00:04:50.688 Compiler for C supports arguments -Wcast-qual: YES
00:04:50.688 Compiler for C supports arguments -Wdeprecated: YES
00:04:50.688 Compiler for C supports arguments -Wformat: YES
00:04:50.688 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:50.688 Compiler for C supports arguments -Wformat-security: NO
00:04:50.688 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:50.688 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:50.688 Compiler for C supports arguments -Wnested-externs: YES
00:04:50.688 Compiler for C supports arguments -Wold-style-definition: YES
00:04:50.688 Compiler for C supports arguments -Wpointer-arith: YES
00:04:50.688 Compiler for C supports arguments -Wsign-compare: YES
00:04:50.688 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:50.688 Compiler for C supports arguments -Wundef: YES
00:04:50.688 Compiler for C supports arguments -Wwrite-strings: YES
00:04:50.688 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:50.688 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:50.688 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:50.688 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:50.688 Compiler for C supports arguments -mavx512f: YES
00:04:50.688 Checking if "AVX512 checking" compiles: YES
00:04:50.688 Fetching value of define "__SSE4_2__" : 1
00:04:50.688 Fetching value of define "__AES__" : 1
00:04:50.688 Fetching value of define "__AVX__" : 1
00:04:50.688 Fetching value of define "__AVX2__" : 1
00:04:50.688 Fetching value of define "__AVX512BW__" : (undefined)
00:04:50.688 Fetching value of define "__AVX512CD__" : (undefined)
00:04:50.688 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:50.688 Fetching value of define "__AVX512F__" : (undefined)
00:04:50.688 Fetching value of define "__AVX512VL__" : (undefined)
00:04:50.688 Fetching value of define "__PCLMUL__" : 1
00:04:50.688 Fetching value of define "__RDRND__" : 1
00:04:50.688 Fetching value of define "__RDSEED__" : 1
00:04:50.688 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:50.688 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:50.688 Message: lib/kvargs: Defining dependency "kvargs"
00:04:50.688 Message: lib/telemetry: Defining dependency "telemetry"
00:04:50.688 Checking for function "getentropy" : YES
00:04:50.688 Message: lib/eal: Defining dependency "eal"
00:04:50.688 Message: lib/ring: Defining dependency "ring"
00:04:50.688 Message: lib/rcu: Defining dependency "rcu"
00:04:50.688 Message: lib/mempool: Defining dependency "mempool"
00:04:50.688 Message: lib/mbuf: Defining dependency "mbuf"
00:04:50.688 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:50.688 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:50.688 Compiler for C supports arguments -mpclmul: YES
00:04:50.688 Compiler for C supports arguments -maes: YES
00:04:50.688 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:50.688 Compiler for C supports arguments -mavx512bw: YES
00:04:50.688 Compiler for C supports arguments -mavx512dq: YES
00:04:50.688 Compiler for C supports arguments -mavx512vl: YES
00:04:50.688 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:50.688 Compiler for C supports arguments -mavx2: YES
00:04:50.688 Compiler for C supports arguments -mavx: YES
00:04:50.688 Message: lib/net: Defining dependency "net"
00:04:50.688 Message: lib/meter: Defining dependency "meter"
00:04:50.688 Message: lib/ethdev: Defining dependency "ethdev"
00:04:50.688 Message: lib/pci: Defining dependency "pci"
00:04:50.688 Message: lib/cmdline: Defining dependency "cmdline"
00:04:50.689 Message: lib/metrics: Defining dependency "metrics"
00:04:50.689 Message: lib/hash: Defining dependency "hash"
00:04:50.689 Message: lib/timer: Defining dependency "timer"
00:04:50.689 Fetching value of define "__AVX2__" : 1 (cached)
00:04:50.689 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:04:50.689 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:04:50.689 Message: lib/acl: Defining dependency "acl"
00:04:50.689 Message: lib/bbdev: Defining dependency "bbdev"
00:04:50.689 Message: lib/bitratestats: Defining dependency "bitratestats"
00:04:50.689 Run-time dependency libelf found: YES 0.191
00:04:50.689 Message: lib/bpf: Defining dependency "bpf"
00:04:50.689 Message: lib/cfgfile: Defining dependency "cfgfile"
00:04:50.689 Message: lib/compressdev: Defining dependency "compressdev"
00:04:50.689 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:50.689 Message: lib/distributor: Defining dependency "distributor"
00:04:50.689 Message: lib/efd: Defining dependency "efd"
00:04:50.689 Message: lib/eventdev: Defining dependency "eventdev"
00:04:50.689 Message: lib/gpudev: Defining dependency "gpudev"
00:04:50.689 Message: lib/gro: Defining dependency "gro"
00:04:50.689 Message: lib/gso: Defining dependency "gso"
00:04:50.689 Message: lib/ip_frag: Defining dependency "ip_frag"
00:04:50.689 Message: lib/jobstats: Defining dependency "jobstats"
00:04:50.689 Message: lib/latencystats: Defining dependency "latencystats"
00:04:50.689 Message: lib/lpm: Defining dependency "lpm"
00:04:50.689 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512IFMA__" : (undefined)
00:04:50.689 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:04:50.689 Message: lib/member: Defining dependency "member"
00:04:50.689 Message: lib/pcapng: Defining dependency "pcapng"
00:04:50.689 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:50.689 Message: lib/power: Defining dependency "power"
00:04:50.689 Message: lib/rawdev: Defining dependency "rawdev"
00:04:50.689 Message: lib/regexdev: Defining dependency "regexdev"
00:04:50.689 Message: lib/dmadev: Defining dependency "dmadev"
00:04:50.689 Message: lib/rib: Defining dependency "rib"
00:04:50.689 Message: lib/reorder: Defining dependency "reorder"
00:04:50.689 Message: lib/sched: Defining dependency "sched"
00:04:50.689 Message: lib/security: Defining dependency "security"
00:04:50.689 Message: lib/stack: Defining dependency "stack"
00:04:50.689 Has header "linux/userfaultfd.h" : YES
00:04:50.689 Message: lib/vhost: Defining dependency "vhost"
00:04:50.689 Message: lib/ipsec: Defining dependency "ipsec"
00:04:50.689 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:50.689 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:04:50.689 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:04:50.689 Compiler for C supports arguments -mavx512bw: YES (cached)
00:04:50.689 Message: lib/fib: Defining dependency "fib"
00:04:50.689 Message: lib/port: Defining dependency "port"
00:04:50.689 Message: lib/pdump: Defining dependency "pdump"
00:04:50.689 Message: lib/table: Defining dependency "table"
00:04:50.689 Message: lib/pipeline: Defining dependency "pipeline"
00:04:50.689 Message: lib/graph: Defining dependency "graph"
00:04:50.689 Message: lib/node: Defining dependency "node"
00:04:50.689 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:50.689 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:50.689 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:50.689 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:50.689 Compiler for C supports arguments -Wno-sign-compare: YES
00:04:50.689 Compiler for C supports arguments -Wno-unused-value: YES
00:04:50.689 Compiler for C supports arguments -Wno-format: YES
00:04:50.689 Compiler for C supports arguments -Wno-format-security: YES
00:04:50.689 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:04:51.621 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:51.621 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:04:51.621 Compiler for C supports arguments -Wno-unused-parameter: YES
00:04:51.621 Fetching value of define "__AVX2__" : 1 (cached)
00:04:51.621 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:51.621 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:51.621 Compiler for C supports arguments -mavx512bw: YES (cached)
00:04:51.621 Compiler for C supports arguments -march=skylake-avx512: YES
00:04:51.621 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:04:51.621 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:51.621 Configuring doxy-api.conf using configuration
00:04:51.621 Program sphinx-build found: NO
00:04:51.621 Configuring rte_build_config.h using configuration
00:04:51.621 Message:
00:04:51.621 =================
00:04:51.621 Applications Enabled
00:04:51.621 =================
00:04:51.621
00:04:51.621 apps:
00:04:51.621 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:04:51.621 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:04:51.621 test-security-perf,
00:04:51.621
00:04:51.621 Message:
00:04:51.621 =================
00:04:51.621 Libraries Enabled
00:04:51.621 =================
00:04:51.621
00:04:51.621 libs:
00:04:51.621 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:04:51.621 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:04:51.621 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:04:51.621 eventdev, gpudev, gro, gso, ip_frag,
jobstats, latencystats, lpm, 00:04:51.621 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:04:51.621 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:04:51.621 table, pipeline, graph, node, 00:04:51.621 00:04:51.621 Message: 00:04:51.621 =============== 00:04:51.621 Drivers Enabled 00:04:51.621 =============== 00:04:51.621 00:04:51.621 common: 00:04:51.621 00:04:51.621 bus: 00:04:51.621 pci, vdev, 00:04:51.621 mempool: 00:04:51.621 ring, 00:04:51.621 dma: 00:04:51.621 00:04:51.621 net: 00:04:51.621 i40e, 00:04:51.621 raw: 00:04:51.621 00:04:51.621 crypto: 00:04:51.621 00:04:51.621 compress: 00:04:51.621 00:04:51.621 regex: 00:04:51.621 00:04:51.621 vdpa: 00:04:51.621 00:04:51.621 event: 00:04:51.621 00:04:51.621 baseband: 00:04:51.621 00:04:51.621 gpu: 00:04:51.621 00:04:51.621 00:04:51.621 Message: 00:04:51.621 ================= 00:04:51.621 Content Skipped 00:04:51.621 ================= 00:04:51.621 00:04:51.621 apps: 00:04:51.621 00:04:51.621 libs: 00:04:51.621 kni: explicitly disabled via build config (deprecated lib) 00:04:51.621 flow_classify: explicitly disabled via build config (deprecated lib) 00:04:51.621 00:04:51.621 drivers: 00:04:51.621 common/cpt: not in enabled drivers build config 00:04:51.621 common/dpaax: not in enabled drivers build config 00:04:51.621 common/iavf: not in enabled drivers build config 00:04:51.621 common/idpf: not in enabled drivers build config 00:04:51.621 common/mvep: not in enabled drivers build config 00:04:51.621 common/octeontx: not in enabled drivers build config 00:04:51.621 bus/auxiliary: not in enabled drivers build config 00:04:51.621 bus/dpaa: not in enabled drivers build config 00:04:51.621 bus/fslmc: not in enabled drivers build config 00:04:51.621 bus/ifpga: not in enabled drivers build config 00:04:51.621 bus/vmbus: not in enabled drivers build config 00:04:51.621 common/cnxk: not in enabled drivers build config 00:04:51.622 common/mlx5: not in enabled drivers build config 00:04:51.622 common/qat: not in enabled drivers build config 00:04:51.622 common/sfc_efx: not in enabled drivers build config 00:04:51.622 mempool/bucket: not in enabled drivers build config 00:04:51.622 mempool/cnxk: not in enabled drivers build config 00:04:51.622 mempool/dpaa: not in enabled drivers build config 00:04:51.622 mempool/dpaa2: not in enabled drivers build config 00:04:51.622 mempool/octeontx: not in enabled drivers build config 00:04:51.622 mempool/stack: not in enabled drivers build config 00:04:51.622 dma/cnxk: not in enabled drivers build config 00:04:51.622 dma/dpaa: not in enabled drivers build config 00:04:51.622 dma/dpaa2: not in enabled drivers build config 00:04:51.622 dma/hisilicon: not in enabled drivers build config 00:04:51.622 dma/idxd: not in enabled drivers build config 00:04:51.622 dma/ioat: not in enabled drivers build config 00:04:51.622 dma/skeleton: not in enabled drivers build config 00:04:51.622 net/af_packet: not in enabled drivers build config 00:04:51.622 net/af_xdp: not in enabled drivers build config 00:04:51.622 net/ark: not in enabled drivers build config 00:04:51.622 net/atlantic: not in enabled drivers build config 00:04:51.622 net/avp: not in enabled drivers build config 00:04:51.622 net/axgbe: not in enabled drivers build config 00:04:51.622 net/bnx2x: not in enabled drivers build config 00:04:51.622 net/bnxt: not in enabled drivers build config 00:04:51.622 net/bonding: not in enabled drivers build config 00:04:51.622 net/cnxk: not in enabled drivers build config 00:04:51.622 net/cxgbe: not in 
enabled drivers build config 00:04:51.622 net/dpaa: not in enabled drivers build config 00:04:51.622 net/dpaa2: not in enabled drivers build config 00:04:51.622 net/e1000: not in enabled drivers build config 00:04:51.622 net/ena: not in enabled drivers build config 00:04:51.622 net/enetc: not in enabled drivers build config 00:04:51.622 net/enetfec: not in enabled drivers build config 00:04:51.622 net/enic: not in enabled drivers build config 00:04:51.622 net/failsafe: not in enabled drivers build config 00:04:51.622 net/fm10k: not in enabled drivers build config 00:04:51.622 net/gve: not in enabled drivers build config 00:04:51.622 net/hinic: not in enabled drivers build config 00:04:51.622 net/hns3: not in enabled drivers build config 00:04:51.622 net/iavf: not in enabled drivers build config 00:04:51.622 net/ice: not in enabled drivers build config 00:04:51.622 net/idpf: not in enabled drivers build config 00:04:51.622 net/igc: not in enabled drivers build config 00:04:51.622 net/ionic: not in enabled drivers build config 00:04:51.622 net/ipn3ke: not in enabled drivers build config 00:04:51.622 net/ixgbe: not in enabled drivers build config 00:04:51.622 net/kni: not in enabled drivers build config 00:04:51.622 net/liquidio: not in enabled drivers build config 00:04:51.622 net/mana: not in enabled drivers build config 00:04:51.622 net/memif: not in enabled drivers build config 00:04:51.622 net/mlx4: not in enabled drivers build config 00:04:51.622 net/mlx5: not in enabled drivers build config 00:04:51.622 net/mvneta: not in enabled drivers build config 00:04:51.622 net/mvpp2: not in enabled drivers build config 00:04:51.622 net/netvsc: not in enabled drivers build config 00:04:51.622 net/nfb: not in enabled drivers build config 00:04:51.622 net/nfp: not in enabled drivers build config 00:04:51.622 net/ngbe: not in enabled drivers build config 00:04:51.622 net/null: not in enabled drivers build config 00:04:51.622 net/octeontx: not in enabled drivers build config 00:04:51.622 net/octeon_ep: not in enabled drivers build config 00:04:51.622 net/pcap: not in enabled drivers build config 00:04:51.622 net/pfe: not in enabled drivers build config 00:04:51.622 net/qede: not in enabled drivers build config 00:04:51.622 net/ring: not in enabled drivers build config 00:04:51.622 net/sfc: not in enabled drivers build config 00:04:51.622 net/softnic: not in enabled drivers build config 00:04:51.622 net/tap: not in enabled drivers build config 00:04:51.622 net/thunderx: not in enabled drivers build config 00:04:51.622 net/txgbe: not in enabled drivers build config 00:04:51.622 net/vdev_netvsc: not in enabled drivers build config 00:04:51.622 net/vhost: not in enabled drivers build config 00:04:51.622 net/virtio: not in enabled drivers build config 00:04:51.622 net/vmxnet3: not in enabled drivers build config 00:04:51.622 raw/cnxk_bphy: not in enabled drivers build config 00:04:51.622 raw/cnxk_gpio: not in enabled drivers build config 00:04:51.622 raw/dpaa2_cmdif: not in enabled drivers build config 00:04:51.622 raw/ifpga: not in enabled drivers build config 00:04:51.622 raw/ntb: not in enabled drivers build config 00:04:51.622 raw/skeleton: not in enabled drivers build config 00:04:51.622 crypto/armv8: not in enabled drivers build config 00:04:51.622 crypto/bcmfs: not in enabled drivers build config 00:04:51.622 crypto/caam_jr: not in enabled drivers build config 00:04:51.622 crypto/ccp: not in enabled drivers build config 00:04:51.622 crypto/cnxk: not in enabled drivers build config 00:04:51.622 
crypto/dpaa_sec: not in enabled drivers build config 00:04:51.622 crypto/dpaa2_sec: not in enabled drivers build config 00:04:51.622 crypto/ipsec_mb: not in enabled drivers build config 00:04:51.622 crypto/mlx5: not in enabled drivers build config 00:04:51.622 crypto/mvsam: not in enabled drivers build config 00:04:51.622 crypto/nitrox: not in enabled drivers build config 00:04:51.622 crypto/null: not in enabled drivers build config 00:04:51.622 crypto/octeontx: not in enabled drivers build config 00:04:51.622 crypto/openssl: not in enabled drivers build config 00:04:51.622 crypto/scheduler: not in enabled drivers build config 00:04:51.622 crypto/uadk: not in enabled drivers build config 00:04:51.622 crypto/virtio: not in enabled drivers build config 00:04:51.622 compress/isal: not in enabled drivers build config 00:04:51.622 compress/mlx5: not in enabled drivers build config 00:04:51.622 compress/octeontx: not in enabled drivers build config 00:04:51.622 compress/zlib: not in enabled drivers build config 00:04:51.622 regex/mlx5: not in enabled drivers build config 00:04:51.622 regex/cn9k: not in enabled drivers build config 00:04:51.622 vdpa/ifc: not in enabled drivers build config 00:04:51.622 vdpa/mlx5: not in enabled drivers build config 00:04:51.622 vdpa/sfc: not in enabled drivers build config 00:04:51.622 event/cnxk: not in enabled drivers build config 00:04:51.622 event/dlb2: not in enabled drivers build config 00:04:51.622 event/dpaa: not in enabled drivers build config 00:04:51.622 event/dpaa2: not in enabled drivers build config 00:04:51.622 event/dsw: not in enabled drivers build config 00:04:51.622 event/opdl: not in enabled drivers build config 00:04:51.622 event/skeleton: not in enabled drivers build config 00:04:51.622 event/sw: not in enabled drivers build config 00:04:51.622 event/octeontx: not in enabled drivers build config 00:04:51.622 baseband/acc: not in enabled drivers build config 00:04:51.622 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:04:51.622 baseband/fpga_lte_fec: not in enabled drivers build config 00:04:51.622 baseband/la12xx: not in enabled drivers build config 00:04:51.622 baseband/null: not in enabled drivers build config 00:04:51.622 baseband/turbo_sw: not in enabled drivers build config 00:04:51.622 gpu/cuda: not in enabled drivers build config 00:04:51.622 00:04:51.622 00:04:51.622 Build targets in project: 314 00:04:51.622 00:04:51.622 DPDK 22.11.4 00:04:51.622 00:04:51.622 User defined options 00:04:51.622 libdir : lib 00:04:51.622 prefix : /home/vagrant/spdk_repo/dpdk/build 00:04:51.622 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:04:51.622 c_link_args : 00:04:51.622 enable_docs : false 00:04:51.622 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:04:51.622 enable_kmods : false 00:04:51.622 machine : native 00:04:51.622 tests : false 00:04:51.622 00:04:51.622 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:51.622 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
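The "User defined options" summary above maps one-to-one onto a meson configure invocation. A minimal sketch of the configure step those options imply — using the `meson setup` form that the deprecation warning just above recommends; option names are assumed from DPDK 22.11's build system, and all paths and values are copied verbatim from the logged summary, so treat this as a hypothetical reconstruction rather than the exact command the job ran:

# Hypothetical reconstruction of the configure step from the logged option summary.
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
# Build with the same parallelism as the logged ninja invocation below.
ninja -C build-tmp -j10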
00:04:51.880 15:31:40 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:04:52.137 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:52.137 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:04:52.137 [2/743] Generating lib/rte_kvargs_def with a custom command 00:04:52.137 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:04:52.137 [4/743] Generating lib/rte_telemetry_def with a custom command 00:04:52.137 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:52.395 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:52.395 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:52.395 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:52.395 [9/743] Linking static target lib/librte_kvargs.a 00:04:52.395 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:52.395 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:52.395 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:52.395 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:52.395 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:52.652 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:52.652 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:52.652 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:52.652 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:52.652 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:52.652 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.652 [21/743] Linking target lib/librte_kvargs.so.23.0 00:04:52.909 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:52.909 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:04:52.909 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:52.909 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:52.909 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:52.909 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:53.166 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:53.166 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:53.166 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:53.166 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:53.166 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:53.166 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:53.166 [34/743] Linking static target lib/librte_telemetry.a 00:04:53.431 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:53.431 [36/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:04:53.431 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:53.431 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:53.431 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:53.431 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:53.431 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:53.689 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:53.948 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.948 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:53.948 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:53.948 [46/743] Linking target lib/librte_telemetry.so.23.0 00:04:53.948 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:53.948 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:54.206 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:54.206 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:04:54.206 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:54.206 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:54.206 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:54.206 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:54.206 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:54.206 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:54.464 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:54.464 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:54.464 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:54.464 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:54.464 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:54.464 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:54.464 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:54.464 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:54.464 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:04:54.722 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:54.722 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:54.722 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:54.722 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:54.722 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:54.722 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:54.980 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:54.980 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:54.980 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:54.980 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:54.980 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:54.980 [77/743] Generating lib/rte_eal_def with a custom command 00:04:54.980 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:04:54.980 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:54.980 [80/743] Generating lib/rte_ring_def with a custom command 00:04:54.980 [81/743] Generating lib/rte_ring_mingw with a custom command 00:04:55.239 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:55.239 [83/743] Generating lib/rte_rcu_def with a custom command 00:04:55.239 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:55.239 [85/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:55.239 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:04:55.239 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:55.239 [88/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:55.498 [89/743] Linking static target lib/librte_ring.a 00:04:55.498 [90/743] Generating lib/rte_mempool_def with a custom command 00:04:55.498 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:04:55.498 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:55.498 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:55.757 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.016 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:56.016 [96/743] Linking static target lib/librte_eal.a 00:04:56.306 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:56.306 [98/743] Generating lib/rte_mbuf_def with a custom command 00:04:56.306 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:04:56.306 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:56.306 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:56.570 [102/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:56.829 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:56.829 [104/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:56.829 [105/743] Linking static target lib/librte_rcu.a 00:04:57.085 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:57.085 [107/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:57.085 [108/743] Linking static target lib/librte_mempool.a 00:04:57.343 [109/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:57.343 [110/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:57.343 [111/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.343 [112/743] Generating lib/rte_net_def with a custom command 00:04:57.343 [113/743] Generating lib/rte_net_mingw with a custom command 00:04:57.343 [114/743] Generating lib/rte_meter_def with a custom command 00:04:57.601 [115/743] Generating lib/rte_meter_mingw with a custom command 00:04:57.601 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:57.601 [117/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:57.601 [118/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:57.601 [119/743] Linking static target lib/librte_meter.a 00:04:57.858 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:57.858 [121/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:58.116 [122/743] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:58.116 [123/743] Linking static target lib/librte_net.a 00:04:58.116 [124/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.375 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:58.375 [126/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.375 [127/743] Linking static target lib/librte_mbuf.a 00:04:58.375 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.632 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:58.890 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:58.890 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:59.148 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:59.148 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:59.148 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.407 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:59.407 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:59.665 [137/743] Generating lib/rte_ethdev_def with a custom command 00:04:59.665 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:04:59.665 [139/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:59.665 [140/743] Linking static target lib/librte_pci.a 00:04:59.923 [141/743] Generating lib/rte_pci_def with a custom command 00:04:59.923 [142/743] Generating lib/rte_pci_mingw with a custom command 00:05:00.181 [143/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:00.181 [144/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.181 [145/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:00.438 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:00.438 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:00.438 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:00.438 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:00.438 [150/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:00.438 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:00.696 [152/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:00.696 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:00.696 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:00.696 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:00.696 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:00.696 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:00.696 [158/743] Generating lib/rte_cmdline_def with a custom command 00:05:00.696 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:05:00.696 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:00.954 [161/743] Generating lib/rte_metrics_def with a custom command 00:05:00.954 [162/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:00.954 [163/743] 
Generating lib/rte_metrics_mingw with a custom command 00:05:00.954 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:00.954 [165/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:01.212 [166/743] Generating lib/rte_hash_def with a custom command 00:05:01.212 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:01.212 [168/743] Generating lib/rte_hash_mingw with a custom command 00:05:01.212 [169/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:01.212 [170/743] Generating lib/rte_timer_def with a custom command 00:05:01.212 [171/743] Generating lib/rte_timer_mingw with a custom command 00:05:01.470 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:01.470 [173/743] Linking static target lib/librte_cmdline.a 00:05:02.095 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:05:02.095 [175/743] Linking static target lib/librte_metrics.a 00:05:02.095 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:02.095 [177/743] Linking static target lib/librte_timer.a 00:05:02.354 [178/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:02.611 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.611 [180/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.611 [181/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:02.611 [182/743] Linking static target lib/librte_ethdev.a 00:05:02.869 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:05:03.127 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.384 [185/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:05:03.384 [186/743] Generating lib/rte_acl_def with a custom command 00:05:03.685 [187/743] Generating lib/rte_acl_mingw with a custom command 00:05:03.685 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:05:03.685 [189/743] Generating lib/rte_bbdev_def with a custom command 00:05:03.685 [190/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:05:03.685 [191/743] Generating lib/rte_bbdev_mingw with a custom command 00:05:03.685 [192/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:05:03.685 [193/743] Generating lib/rte_bitratestats_def with a custom command 00:05:03.976 [194/743] Generating lib/rte_bitratestats_mingw with a custom command 00:05:04.567 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:05:04.567 [196/743] Linking static target lib/librte_bitratestats.a 00:05:04.567 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:05:04.826 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.826 [199/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:05:05.391 [200/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:05:05.391 [201/743] Linking static target lib/librte_bbdev.a 00:05:05.391 [202/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:05:05.391 [203/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:05:06.325 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:05:06.325 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.325 
[206/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:05:06.582 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:05:06.839 [208/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:06.839 [209/743] Generating lib/rte_bpf_def with a custom command 00:05:06.839 [210/743] Linking static target lib/librte_hash.a 00:05:06.839 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:05:06.839 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:05:06.839 [213/743] Generating lib/rte_cfgfile_def with a custom command 00:05:06.839 [214/743] Generating lib/rte_cfgfile_mingw with a custom command 00:05:07.096 [215/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.428 [216/743] Linking target lib/librte_eal.so.23.0 00:05:07.428 [217/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:05:07.428 [218/743] Linking static target lib/librte_cfgfile.a 00:05:07.428 [219/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:05:07.428 [220/743] Linking static target lib/acl/libavx512_tmp.a 00:05:07.428 [221/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:05:07.703 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:05:07.703 [223/743] Linking target lib/librte_ring.so.23.0 00:05:07.703 [224/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.703 [225/743] Linking target lib/librte_meter.so.23.0 00:05:07.703 [226/743] Linking target lib/librte_pci.so.23.0 00:05:07.961 [227/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:05:07.961 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:05:07.961 [229/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:05:07.961 [230/743] Linking target lib/librte_rcu.so.23.0 00:05:07.961 [231/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.961 [232/743] Linking target lib/librte_mempool.so.23.0 00:05:07.961 [233/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:05:07.961 [234/743] Linking target lib/librte_timer.so.23.0 00:05:07.961 [235/743] Linking target lib/librte_cfgfile.so.23.0 00:05:07.961 [236/743] Generating lib/rte_compressdev_def with a custom command 00:05:07.961 [237/743] Generating lib/rte_compressdev_mingw with a custom command 00:05:08.220 [238/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:05:08.220 [239/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:05:08.220 [240/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:05:08.220 [241/743] Linking target lib/librte_mbuf.so.23.0 00:05:08.220 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:08.220 [243/743] Generating lib/rte_cryptodev_def with a custom command 00:05:08.479 [244/743] Generating lib/rte_cryptodev_mingw with a custom command 00:05:08.479 [245/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:05:08.479 [246/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:08.479 [247/743] Linking target lib/librte_net.so.23.0 00:05:08.479 [248/743] Linking target lib/librte_bbdev.so.23.0 00:05:08.738 [249/743] Generating symbol file 
lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:05:08.738 [250/743] Linking target lib/librte_cmdline.so.23.0 00:05:08.738 [251/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:05:08.738 [252/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:05:08.738 [253/743] Linking target lib/librte_hash.so.23.0 00:05:08.738 [254/743] Linking static target lib/librte_acl.a 00:05:08.738 [255/743] Linking static target lib/librte_bpf.a 00:05:08.738 [256/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:08.996 [257/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:05:08.996 [258/743] Generating lib/rte_distributor_def with a custom command 00:05:08.996 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:05:09.256 [260/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:09.256 [261/743] Linking static target lib/librte_compressdev.a 00:05:09.256 [262/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.256 [263/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.256 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:05:09.256 [265/743] Generating lib/rte_efd_def with a custom command 00:05:09.256 [266/743] Generating lib/rte_efd_mingw with a custom command 00:05:09.514 [267/743] Linking target lib/librte_acl.so.23.0 00:05:09.514 [268/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:05:09.514 [269/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:09.514 [270/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:05:09.772 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:05:09.772 [272/743] Linking static target lib/librte_distributor.a 00:05:10.030 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.030 [274/743] Linking target lib/librte_ethdev.so.23.0 00:05:10.030 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:05:10.030 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.030 [277/743] Linking target lib/librte_distributor.so.23.0 00:05:10.288 [278/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:05:10.288 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:05:10.288 [280/743] Linking target lib/librte_metrics.so.23.0 00:05:10.288 [281/743] Linking target lib/librte_bpf.so.23.0 00:05:10.546 [282/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.546 [283/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:05:10.546 [284/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:05:10.546 [285/743] Linking target lib/librte_compressdev.so.23.0 00:05:10.546 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:05:10.546 [287/743] Generating lib/rte_eventdev_def with a custom command 00:05:10.546 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:05:10.546 [289/743] Generating lib/rte_gpudev_def with a custom command 00:05:10.546 [290/743] Generating lib/rte_gpudev_mingw with a custom 
command 00:05:10.806 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:05:10.806 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:05:10.806 [293/743] Linking static target lib/librte_efd.a 00:05:11.073 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.073 [295/743] Linking target lib/librte_efd.so.23.0 00:05:11.073 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:05:11.347 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:11.347 [298/743] Linking static target lib/librte_cryptodev.a 00:05:11.605 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:05:11.605 [300/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:05:11.605 [301/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:05:11.605 [302/743] Generating lib/rte_gro_def with a custom command 00:05:11.605 [303/743] Generating lib/rte_gro_mingw with a custom command 00:05:11.863 [304/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:05:11.863 [305/743] Linking static target lib/librte_gpudev.a 00:05:12.120 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:05:12.120 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:05:12.120 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:05:12.377 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:05:12.635 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:05:12.635 [311/743] Linking static target lib/librte_gro.a 00:05:12.635 [312/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:05:12.635 [313/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:05:12.635 [314/743] Generating lib/rte_gso_def with a custom command 00:05:12.635 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:05:12.635 [316/743] Generating lib/rte_gso_mingw with a custom command 00:05:12.891 [317/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.891 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:05:12.891 [319/743] Linking target lib/librte_gro.so.23.0 00:05:12.891 [320/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:05:13.147 [321/743] Linking static target lib/librte_eventdev.a 00:05:13.147 [322/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.147 [323/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:05:13.147 [324/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:05:13.147 [325/743] Linking static target lib/librte_gso.a 00:05:13.147 [326/743] Linking target lib/librte_gpudev.so.23.0 00:05:13.147 [327/743] Generating lib/rte_ip_frag_def with a custom command 00:05:13.147 [328/743] Generating lib/rte_ip_frag_mingw with a custom command 00:05:13.403 [329/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.403 [330/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:05:13.403 [331/743] Linking static target lib/librte_jobstats.a 00:05:13.403 [332/743] Generating lib/rte_jobstats_def with a custom command 00:05:13.403 [333/743] Generating lib/rte_jobstats_mingw with a custom command 00:05:13.403 [334/743] Linking 
target lib/librte_gso.so.23.0 00:05:13.713 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:05:13.713 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:05:13.713 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:05:13.713 [338/743] Generating lib/rte_latencystats_def with a custom command 00:05:13.713 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:05:13.713 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:05:13.713 [341/743] Generating lib/rte_lpm_def with a custom command 00:05:13.713 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:05:13.713 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.970 [344/743] Linking target lib/librte_jobstats.so.23.0 00:05:13.970 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:05:13.970 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.970 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:05:13.970 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:05:13.970 [349/743] Linking static target lib/librte_ip_frag.a 00:05:14.227 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:05:14.227 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.484 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:05:14.484 [353/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:05:14.484 [354/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:05:14.484 [355/743] Linking static target lib/librte_latencystats.a 00:05:14.484 [356/743] Generating lib/rte_member_def with a custom command 00:05:14.742 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:05:14.742 [358/743] Generating lib/rte_member_mingw with a custom command 00:05:14.742 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:05:14.742 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:05:14.742 [361/743] Generating lib/rte_pcapng_def with a custom command 00:05:14.742 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:05:14.742 [363/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:14.742 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.742 [365/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:05:14.742 [366/743] Linking target lib/librte_latencystats.so.23.0 00:05:14.742 [367/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:05:15.000 [368/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:15.000 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:15.000 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:15.265 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:05:15.265 [372/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.265 [373/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:05:15.265 [374/743] Linking target 
lib/librte_eventdev.so.23.0 00:05:15.265 [375/743] Generating lib/rte_power_def with a custom command 00:05:15.265 [376/743] Generating lib/rte_power_mingw with a custom command 00:05:15.547 [377/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:05:15.547 [378/743] Generating lib/rte_rawdev_def with a custom command 00:05:15.547 [379/743] Generating lib/rte_rawdev_mingw with a custom command 00:05:15.547 [380/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:15.547 [381/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:05:15.547 [382/743] Generating lib/rte_regexdev_def with a custom command 00:05:15.547 [383/743] Linking static target lib/librte_lpm.a 00:05:15.547 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:05:15.547 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:15.805 [386/743] Generating lib/rte_dmadev_def with a custom command 00:05:15.805 [387/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:05:15.805 [388/743] Linking static target lib/librte_rawdev.a 00:05:15.805 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:05:15.805 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:05:15.805 [391/743] Linking static target lib/librte_pcapng.a 00:05:15.805 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:05:15.805 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:15.805 [394/743] Generating lib/rte_rib_def with a custom command 00:05:15.805 [395/743] Generating lib/rte_rib_mingw with a custom command 00:05:16.063 [396/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.063 [397/743] Linking target lib/librte_lpm.so.23.0 00:05:16.063 [398/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:16.063 [399/743] Linking static target lib/librte_dmadev.a 00:05:16.063 [400/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:16.063 [401/743] Linking static target lib/librte_power.a 00:05:16.063 [402/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.063 [403/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:05:16.321 [404/743] Generating lib/rte_reorder_def with a custom command 00:05:16.321 [405/743] Linking target lib/librte_pcapng.so.23.0 00:05:16.321 [406/743] Generating lib/rte_reorder_mingw with a custom command 00:05:16.321 [407/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.321 [408/743] Linking target lib/librte_rawdev.so.23.0 00:05:16.321 [409/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:05:16.321 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:05:16.321 [411/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:05:16.321 [412/743] Linking static target lib/librte_regexdev.a 00:05:16.321 [413/743] Linking static target lib/librte_member.a 00:05:16.579 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:05:16.579 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:05:16.579 [416/743] Generating lib/rte_sched_def with a custom command 00:05:16.579 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:05:16.579 [418/743] 
Generating lib/rte_sched_mingw with a custom command 00:05:16.579 [419/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:05:16.579 [420/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.579 [421/743] Generating lib/rte_security_def with a custom command 00:05:16.579 [422/743] Generating lib/rte_security_mingw with a custom command 00:05:16.579 [423/743] Linking target lib/librte_dmadev.so.23.0 00:05:16.838 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:05:16.838 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:05:16.838 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:16.838 [427/743] Linking static target lib/librte_reorder.a 00:05:16.838 [428/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:05:16.838 [429/743] Generating lib/rte_stack_def with a custom command 00:05:16.838 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:05:16.838 [431/743] Generating lib/rte_stack_mingw with a custom command 00:05:16.838 [432/743] Linking static target lib/librte_stack.a 00:05:16.838 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.096 [434/743] Linking target lib/librte_member.so.23.0 00:05:17.096 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:17.096 [436/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:05:17.096 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.096 [438/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.096 [439/743] Linking static target lib/librte_rib.a 00:05:17.096 [440/743] Linking target lib/librte_reorder.so.23.0 00:05:17.096 [441/743] Linking target lib/librte_stack.so.23.0 00:05:17.353 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.353 [443/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.353 [444/743] Linking target lib/librte_power.so.23.0 00:05:17.353 [445/743] Linking target lib/librte_regexdev.so.23.0 00:05:17.612 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:17.612 [447/743] Linking static target lib/librte_security.a 00:05:17.612 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.612 [449/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:17.612 [450/743] Linking target lib/librte_rib.so.23.0 00:05:17.612 [451/743] Generating lib/rte_vhost_def with a custom command 00:05:17.612 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:05:17.870 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:05:17.870 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:17.870 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.127 [456/743] Linking target lib/librte_security.so.23.0 00:05:18.127 [457/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:05:18.127 [458/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:18.386 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:05:18.386 [460/743] Linking static target lib/librte_sched.a 00:05:18.644 
[461/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:05:18.644 [462/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.903 [463/743] Linking target lib/librte_sched.so.23.0 00:05:18.903 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:18.903 [465/743] Generating lib/rte_ipsec_def with a custom command 00:05:18.903 [466/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:05:18.903 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:05:18.903 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:05:18.903 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:19.162 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:05:19.162 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:05:19.421 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:05:19.421 [473/743] Generating lib/rte_fib_def with a custom command 00:05:19.421 [474/743] Generating lib/rte_fib_mingw with a custom command 00:05:19.421 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:05:19.421 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:05:19.679 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:05:19.679 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:05:19.679 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:05:19.679 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:05:19.679 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:05:19.679 [482/743] Linking static target lib/librte_ipsec.a 00:05:20.245 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.245 [484/743] Linking target lib/librte_ipsec.so.23.0 00:05:20.245 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:05:20.502 [486/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:05:20.502 [487/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:05:20.502 [488/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:05:20.502 [489/743] Linking static target lib/librte_fib.a 00:05:20.502 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:05:20.502 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:05:20.761 [492/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:05:21.019 [493/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.019 [494/743] Linking target lib/librte_fib.so.23.0 00:05:21.584 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:05:21.584 [496/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:05:21.584 [497/743] Generating lib/rte_port_def with a custom command 00:05:21.584 [498/743] Generating lib/rte_port_mingw with a custom command 00:05:21.584 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:05:21.584 [500/743] Generating lib/rte_pdump_def with a custom command 00:05:21.584 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:05:21.584 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:05:21.584 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:05:21.841 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:05:21.841 [505/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:05:22.098 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:05:22.098 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:05:22.098 [508/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:05:22.098 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:05:22.098 [510/743] Linking static target lib/librte_port.a 00:05:22.664 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:05:22.664 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:05:22.664 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.664 [514/743] Linking target lib/librte_port.so.23.0 00:05:22.664 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:05:22.664 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:05:22.921 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:05:22.921 [518/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:05:22.921 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:05:22.921 [520/743] Linking static target lib/librte_pdump.a 00:05:23.487 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.487 [522/743] Linking target lib/librte_pdump.so.23.0 00:05:23.487 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:05:23.487 [524/743] Generating lib/rte_table_def with a custom command 00:05:23.487 [525/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:05:23.487 [526/743] Generating lib/rte_table_mingw with a custom command 00:05:23.487 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:05:23.744 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:05:23.744 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:05:24.002 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:05:24.002 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:05:24.002 [532/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:05:24.002 [533/743] Linking static target lib/librte_table.a 00:05:24.002 [534/743] Generating lib/rte_pipeline_def with a custom command 00:05:24.259 [535/743] Generating lib/rte_pipeline_mingw with a custom command 00:05:24.259 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:05:24.825 [537/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:24.825 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:05:24.825 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.825 [540/743] Linking target lib/librte_table.so.23.0 00:05:24.825 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:05:25.083 [542/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:05:25.083 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:05:25.083 [544/743] Generating lib/rte_graph_def with a custom command 00:05:25.083 [545/743] Generating lib/rte_graph_mingw 
with a custom command 00:05:25.340 [546/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:05:25.340 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:05:25.598 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:05:25.598 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:05:25.598 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:05:25.887 [551/743] Linking static target lib/librte_graph.a 00:05:25.887 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:05:26.163 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:05:26.163 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:05:26.163 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:05:26.421 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:05:26.421 [557/743] Generating lib/rte_node_def with a custom command 00:05:26.421 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:05:26.421 [559/743] Generating lib/rte_node_mingw with a custom command 00:05:26.678 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:26.678 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.678 [562/743] Linking target lib/librte_graph.so.23.0 00:05:26.678 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:26.935 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:05:26.935 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:05:26.935 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:26.935 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:05:26.935 [568/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:05:26.935 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:05:26.935 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:27.208 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:05:27.208 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:05:27.208 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:27.208 [574/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:05:27.208 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:05:27.208 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:05:27.208 [577/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:05:27.208 [578/743] Linking static target lib/librte_node.a 00:05:27.208 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:27.208 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:27.208 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:27.466 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.466 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:27.466 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:27.466 [585/743] Linking static target drivers/librte_bus_vdev.a 00:05:27.466 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:05:27.466 [587/743] Linking target lib/librte_node.so.23.0 00:05:27.466 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:27.466 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:27.724 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.724 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:27.724 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:27.724 [593/743] Linking target drivers/librte_bus_vdev.so.23.0 00:05:27.724 [594/743] Linking static target drivers/librte_bus_pci.a 00:05:27.724 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:27.982 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:05:28.239 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.239 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:05:28.239 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:05:28.239 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:05:28.239 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:05:28.239 [602/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:05:28.497 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:28.497 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:28.497 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:05:28.755 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:28.755 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:28.755 [608/743] Linking static target drivers/librte_mempool_ring.a 00:05:28.755 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:28.755 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:05:29.320 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:05:29.577 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:05:29.835 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:05:29.835 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:05:30.093 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:05:30.350 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:05:30.350 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:05:30.607 [618/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:05:30.865 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:05:30.865 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:05:31.122 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:05:31.122 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:05:31.122 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:05:31.122 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:05:31.411 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:05:32.357 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:05:32.615 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:05:32.615 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:05:32.615 [629/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:05:32.615 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:05:32.615 [631/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:05:32.872 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:05:32.872 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:05:32.872 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:05:33.130 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:05:33.388 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:05:33.646 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:05:33.904 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:05:33.904 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:05:33.904 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:05:33.904 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:05:34.162 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:05:34.162 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:34.420 [644/743] Linking static target drivers/librte_net_i40e.a 00:05:34.420 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:05:34.420 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:05:34.420 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:05:34.420 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:05:34.698 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:05:34.698 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:05:34.956 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.956 [652/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:05:34.956 [653/743] Linking target drivers/librte_net_i40e.so.23.0 00:05:34.956 [654/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:05:35.523 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:05:35.781 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:05:35.781 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:05:35.781 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:05:35.781 [659/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:05:35.781 [660/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:05:35.781 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:05:36.039 [662/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:36.039 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:05:36.039 [664/743] Linking static target lib/librte_vhost.a 00:05:36.039 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:05:36.296 [666/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:05:36.296 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:05:36.552 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:05:36.808 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:05:36.808 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:05:37.069 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:05:37.345 [672/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.345 [673/743] Linking target lib/librte_vhost.so.23.0 00:05:37.602 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:05:37.858 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:05:37.858 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:05:38.115 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:05:38.115 [678/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:05:38.115 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:05:38.372 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:05:38.372 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:05:38.630 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:05:38.888 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:05:38.888 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:05:38.888 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:05:38.888 [686/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:05:39.145 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:05:39.145 [688/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:05:39.145 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:05:39.403 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:05:39.403 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:05:39.403 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:05:39.661 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:05:39.661 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:05:40.226 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:05:40.226 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:05:40.226 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:05:40.483 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:05:40.740 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:05:40.997 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:05:40.997 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:05:41.254 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:05:41.511 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:05:41.511 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:05:41.769 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:05:41.769 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:05:42.027 [707/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:05:42.027 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:05:42.027 [709/743] Linking static target lib/librte_pipeline.a 00:05:42.285 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:05:42.543 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:05:42.543 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:05:42.801 [713/743] Linking target app/dpdk-dumpcap 00:05:42.801 [714/743] Linking target app/dpdk-pdump 00:05:43.059 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:05:43.059 [716/743] Linking target app/dpdk-proc-info 00:05:43.059 [717/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:05:43.317 [718/743] Linking target app/dpdk-test-acl 00:05:43.317 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:05:43.317 [720/743] Linking target app/dpdk-test-bbdev 00:05:43.317 [721/743] Linking target app/dpdk-test-cmdline 00:05:43.317 [722/743] Linking target app/dpdk-test-compress-perf 00:05:43.576 [723/743] Linking target app/dpdk-test-crypto-perf 00:05:43.576 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:05:43.576 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:05:43.576 [726/743] Linking target app/dpdk-test-eventdev 00:05:43.576 [727/743] Linking target app/dpdk-test-fib 00:05:43.834 [728/743] Linking target app/dpdk-test-flow-perf 00:05:43.834 [729/743] Linking target app/dpdk-test-gpudev 00:05:43.834 [730/743] Linking target app/dpdk-test-pipeline 00:05:44.092 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:05:44.350 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:05:44.609 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:05:44.609 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:05:44.609 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:05:44.866 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:05:45.123 [737/743] Linking target app/dpdk-test-sad 00:05:45.123 [738/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.123 [739/743] Linking target app/dpdk-testpmd 00:05:45.123 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:05:45.123 [741/743] Linking target lib/librte_pipeline.so.23.0 00:05:45.382 [742/743] Linking target app/dpdk-test-regex 00:05:45.655 [743/743] Linking target 
app/dpdk-test-security-perf 00:05:45.655 15:32:34 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:05:45.655 15:32:34 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:05:45.655 15:32:34 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:05:45.655 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:45.955 [0/1] Installing files. 00:05:46.217 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:05:46.217 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:05:46.217 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:05:46.218 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.218 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.219 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.220 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:46.221 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:05:46.222 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:05:46.222 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:05:46.222 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
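At this point the install step has staged the full example sources under build/share/dpdk/examples, so they can be rebuilt standalone against this installation. As a purely illustrative aside (not part of the captured log), the boilerplate all of these examples share is EAL setup and teardown; a minimal sketch, assuming the headers and libraries installed by this run:

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Parse the standard EAL options (cores, memory, devices, ...). */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

        printf("running on lcore %u\n", rte_lcore_id());

        /* Release hugepages and other EAL state before exiting. */
        rte_eal_cleanup();
        return 0;
    }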
00:05:46.222 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.222 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
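Each DPDK library above is installed twice: once as a static archive (.a) and once as a versioned shared object (.so.23.0), both under build/lib. To illustrate the kind of API these libraries export, here is a hedged sketch against librte_ring; the ring name and sizes are invented for the example, and rte_eal_init() is assumed to have already run:

    #include <rte_ring.h>
    #include <rte_lcore.h>

    static int demo_ring_roundtrip(void)
    {
        int dummy = 42, *out = NULL;

        /* Single-producer/single-consumer ring with 1024 slots. */
        struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
            return -1;

        rte_ring_enqueue(r, &dummy);
        rte_ring_dequeue(r, (void **)&out);

        rte_ring_free(r);
        return (out == &dummy) ? 0 : -1;
    }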
00:05:46.481 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:05:46.481 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:05:46.481 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:05:46.481 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:05:46.481 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:05:46.481 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.481 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:05:46.743 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
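Note the split visible above: static driver archives land in build/lib, the shared PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) go to the plugin directory build/lib/dpdk/pmds-23.0, and the test and diagnostic binaries go to build/bin. In a shared build the EAL can also be pointed at such a plugin explicitly with its -d option; a hedged sketch, where the argv below is fabricated purely for illustration:

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>

    int main(void)
    {
        /* Illustrative EAL arguments: one core, plus an explicitly
         * loaded PMD from the plugin directory installed above. */
        char *eal_argv[] = {
            "demo", "-l", "0",
            "-d", "/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

        rte_eal_cleanup();
        return 0;
    }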
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.743 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
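The EAL headers come in two flavors here: the generic/ fallbacks installed to build/include/generic, and the x86 implementations of the same interfaces (rte_atomic.h, rte_spinlock.h, rte_memcpy.h, ...) installed directly to build/include, which is what an #include resolves to on this target. A hedged usage sketch of one of them; the names demo_lock and demo_counter are invented for the example:

    #include <rte_spinlock.h>

    static rte_spinlock_t demo_lock = RTE_SPINLOCK_INITIALIZER;
    static unsigned long demo_counter;

    static void demo_increment(void)
    {
        /* Busy-wait lock from the x86 rte_spinlock.h installed above. */
        rte_spinlock_lock(&demo_lock);
        demo_counter++;
        rte_spinlock_unlock(&demo_lock);
    }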
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
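rte_mempool.h and the rte_mbuf*.h family just installed are the packet-buffer core of DPDK. A hedged sketch of their basic flow; the pool name and sizes are invented for the example, and rte_eal_init() is assumed to have run:

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static int demo_mbuf_roundtrip(void)
    {
        /* 8192 mbufs with the default data room, per-lcore cache of 256. */
        struct rte_mempool *mp = rte_pktmbuf_pool_create("demo_pool",
                8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
            return -1;

        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m == NULL)
            return -1;

        /* Reserve 64 bytes of payload in the freshly allocated mbuf. */
        char *payload = rte_pktmbuf_append(m, 64);
        if (payload != NULL)
            payload[0] = 0x42;

        rte_pktmbuf_free(m);
        return 0;
    }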
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.744 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
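rte_ethdev.h above is the NIC-facing API that the PMDs installed earlier plug into. A hedged sketch (illustrative only) that walks whatever ports the EAL probed and prints their driver names:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void demo_list_ports(void)
    {
        uint16_t port_id;

        /* Iterate over all ethdev ports probed during rte_eal_init(). */
        RTE_ETH_FOREACH_DEV(port_id) {
            struct rte_eth_dev_info info;

            if (rte_eth_dev_info_get(port_id, &info) == 0)
                printf("port %u: driver %s\n", port_id, info.driver_name);
        }
    }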
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
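The hash headers just installed bundle several standalone hash functions alongside the hash-table API in rte_hash.h. A hedged sketch of the two lightest ones; the key length and seed are chosen arbitrarily for the example:

    #include <stdint.h>
    #include <rte_jhash.h>
    #include <rte_hash_crc.h>

    static uint32_t demo_hash(const void *key, uint32_t key_len)
    {
        /* Jenkins hash (portable) and CRC32-based hash (SSE4.2 on x86). */
        uint32_t jh = rte_jhash(key, key_len, 0);
        uint32_t ch = rte_hash_crc(key, key_len, 0);

        return jh ^ ch;
    }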
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
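rte_lpm.h provides the longest-prefix-match tables used by the l3fwd example staged earlier; the altivec/neon/scalar/sse/sve headers above are per-architecture lookup paths behind the same interface. A hedged sketch; the table name, sizes, and route are invented for the example, and an initialized EAL is assumed:

    #include <rte_ip.h>
    #include <rte_lpm.h>

    static int demo_lpm(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,
            .number_tbl8s = 256,
            .flags = 0,
        };
        /* Create the table on NUMA socket 0 (illustrative choice). */
        struct rte_lpm *lpm = rte_lpm_create("demo_lpm", 0, &cfg);
        uint32_t next_hop = 0;
        int found;

        if (lpm == NULL)
            return -1;

        /* Route 10.0.0.0/8 -> next hop 1, then look up one address in it. */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);
        found = (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0);

        rte_lpm_free(lpm);
        return found ? (int)next_hop : -1;
    }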
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.745 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
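[annotation, not log output] The run of "Installing ... .h to .../build/include" entries above stages DPDK's public headers. As a hedged aside, a staged header can be syntax-checked in isolation, assuming rte_config.h was staged earlier in this same install step; the lpm_check.c file name is made up for illustration:

  cat > /tmp/lpm_check.c <<'EOF'
  #include <rte_lpm.h>   /* one of the headers installed above */
  int main(void) { return 0; }
  EOF
  gcc -fsyntax-only -I /home/vagrant/spdk_repo/dpdk/build/include /tmp/lpm_check.c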
00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.746 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:46.747 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:05:46.747 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:05:46.747 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:05:46.747 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:05:46.747 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:05:46.747 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:05:46.747 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:05:46.747 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:05:46.747 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:05:46.747 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:05:46.747 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:05:46.747 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:05:46.747 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:05:46.747 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:05:46.747 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:05:46.747 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:05:46.747 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:05:46.747 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:05:46.747 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:05:46.747 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:05:46.747 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:05:46.747 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:05:46.747 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:05:46.747 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:05:46.747 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:05:46.747 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:05:46.747 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:05:46.747 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:05:46.747 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:05:46.747 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:05:46.747 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:05:46.747 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:05:46.747 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:05:46.747 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:05:46.747 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:05:46.747 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:05:46.747 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:05:46.747 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:05:46.747 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:05:46.747 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:05:46.747 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:05:46.747 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:05:46.747 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:05:46.747 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:05:46.747 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:05:46.747 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:05:46.747 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:05:46.747 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:05:46.747 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:05:46.747 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:05:46.747 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:05:46.747 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:05:46.747 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:05:46.747 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:05:46.747 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:05:46.747 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:05:46.747 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:05:46.747 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:05:46.747 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:05:46.747 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:05:46.747 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:05:46.747 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:05:46.747 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:05:46.747 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:05:46.747 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:05:46.747 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:05:46.747 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:05:46.747 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:05:46.747 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:05:46.747 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:05:46.747 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:05:46.747 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:05:46.747 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:05:46.747 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:05:46.747 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:05:46.747 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:05:46.747 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:05:46.747 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:05:46.747 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:05:46.748 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:05:46.748 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:05:46.748 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:05:46.748 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:05:46.748 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:05:46.748 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:05:46.748 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:05:46.748 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:05:46.748 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:05:46.748 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:05:46.748 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:05:46.748 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:05:46.748 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:05:46.748 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:05:46.748 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:05:46.748 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:05:46.748 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:05:46.748 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:05:46.748 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:05:46.748 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:05:46.748 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:05:46.748 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:05:46.748 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:05:46.748 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:05:46.748 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:05:46.748 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:05:46.748 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
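[annotation, not log output] The paired "Installing symlink" entries above lay down the conventional three-name chain for each shared library. A hedged sketch of how that looks on disk for librte_eal, annotated by hand; every library in this step follows the same pattern:

  ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
  # librte_eal.so      -> librte_eal.so.23     link-time name resolved by -lrte_eal
  # librte_eal.so.23   -> librte_eal.so.23.0   soname loaded by the dynamic linker
  # librte_eal.so.23.0                         the DSO itself (ABI 23, matching pmds-23.0 above)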
00:05:46.748 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:05:46.748 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:05:46.748 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:05:46.748 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:05:46.748 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:05:46.748 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:05:46.748 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:05:46.748 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:05:46.748 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:05:46.748 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:05:46.748 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:05:46.748 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:05:46.748 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:46.748 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:05:46.748 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:46.748 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:05:46.748 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:46.748 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:05:46.748 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:46.748 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:05:46.748 15:32:35 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:05:46.748 ************************************ 00:05:46.748 END TEST build_native_dpdk 00:05:46.748 ************************************ 00:05:46.748 15:32:35 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:46.748 00:05:46.748 real 1m2.866s 00:05:46.748 user 7m21.927s 00:05:46.748 sys 1m18.217s 00:05:46.748 15:32:35 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:46.748 15:32:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:05:46.748 15:32:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:46.748 15:32:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:46.748 15:32:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:05:47.007 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:05:47.007 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:05:47.007 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:05:47.007 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:47.572 Using 'verbs' RDMA provider 00:06:03.407 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:15.659 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:15.659 Creating mk/config.mk...done. 00:06:15.659 Creating mk/cc.flags.mk...done. 00:06:15.659 Type 'make' to build. 00:06:15.659 15:33:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:15.659 15:33:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:15.659 15:33:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:15.659 15:33:03 -- common/autotest_common.sh@10 -- $ set +x 00:06:15.659 ************************************ 00:06:15.659 START TEST make 00:06:15.659 ************************************ 00:06:15.659 15:33:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:15.659 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:06:15.659 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:06:15.659 meson setup builddir \ 00:06:15.659 -Dwith-libaio=enabled \ 00:06:15.659 -Dwith-liburing=enabled \ 00:06:15.659 -Dwith-libvfn=disabled \ 00:06:15.659 -Dwith-spdk=disabled \ 00:06:15.659 -Dexamples=false \ 00:06:15.659 -Dtests=false \ 00:06:15.659 -Dtools=false && \ 00:06:15.659 meson compile -C builddir && \ 00:06:15.659 cd -) 00:06:15.659 make[1]: Nothing to be done for 'all'. 
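[annotation, not log output] configure resolved the DPDK dependency through the pkgconfig directory reported above. A hedged way to repeat that lookup by hand; the version printed should be a 22.11 release, consistent with the pmds-23.0 plugin directory installed earlier:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk       # DPDK release this build staged
  pkg-config --cflags --libs libdpdk    # the flags configure records for SPDK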
00:06:17.658 The Meson build system 00:06:17.658 Version: 1.5.0 00:06:17.658 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:06:17.658 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:17.658 Build type: native build 00:06:17.658 Project name: xnvme 00:06:17.658 Project version: 0.7.5 00:06:17.658 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:17.658 C linker for the host machine: gcc ld.bfd 2.40-14 00:06:17.658 Host machine cpu family: x86_64 00:06:17.658 Host machine cpu: x86_64 00:06:17.658 Message: host_machine.system: linux 00:06:17.658 Compiler for C supports arguments -Wno-missing-braces: YES 00:06:17.658 Compiler for C supports arguments -Wno-cast-function-type: YES 00:06:17.658 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:06:17.658 Run-time dependency threads found: YES 00:06:17.658 Has header "setupapi.h" : NO 00:06:17.658 Has header "linux/blkzoned.h" : YES 00:06:17.658 Has header "linux/blkzoned.h" : YES (cached) 00:06:17.658 Has header "libaio.h" : YES 00:06:17.658 Library aio found: YES 00:06:17.658 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:17.658 Run-time dependency liburing found: YES 2.2 00:06:17.658 Dependency libvfn skipped: feature with-libvfn disabled 00:06:17.658 Found CMake: /usr/bin/cmake (3.27.7) 00:06:17.658 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:06:17.658 Subproject spdk : skipped: feature with-spdk disabled 00:06:17.658 Run-time dependency appleframeworks found: NO (tried framework) 00:06:17.658 Run-time dependency appleframeworks found: NO (tried framework) 00:06:17.658 Library rt found: YES 00:06:17.658 Checking for function "clock_gettime" with dependency -lrt: YES 00:06:17.658 Configuring xnvme_config.h using configuration 00:06:17.658 Configuring xnvme.spec using configuration 00:06:17.658 Run-time dependency bash-completion found: YES 2.11 00:06:17.658 Message: Bash-completions: /usr/share/bash-completion/completions 00:06:17.658 Program cp found: YES (/usr/bin/cp) 00:06:17.658 Build targets in project: 3 00:06:17.658 00:06:17.658 xnvme 0.7.5 00:06:17.658 00:06:17.658 Subprojects 00:06:17.658 spdk : NO Feature 'with-spdk' disabled 00:06:17.658 00:06:17.658 User defined options 00:06:17.658 examples : false 00:06:17.658 tests : false 00:06:17.658 tools : false 00:06:17.658 with-libaio : enabled 00:06:17.658 with-liburing: enabled 00:06:17.658 with-libvfn : disabled 00:06:17.658 with-spdk : disabled 00:06:17.658 00:06:17.658 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:18.222 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:06:18.222 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:06:18.222 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:06:18.222 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:06:18.222 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:06:18.222 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:06:18.222 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:06:18.480 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:06:18.480 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:06:18.480 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:06:18.480 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:06:18.480 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:06:18.480 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:06:18.480 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:06:18.480 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:06:18.480 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:06:18.480 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:06:18.480 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:06:18.480 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:06:18.480 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:06:18.480 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:06:18.480 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:06:18.480 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:06:18.737 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:06:18.737 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:06:18.737 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:06:18.737 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:06:18.737 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:06:18.737 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:06:18.737 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:06:18.737 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:06:18.738 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:06:18.738 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:06:18.738 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:06:18.738 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:06:18.738 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:06:18.738 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:06:18.738 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:06:18.738 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:06:18.738 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:06:18.738 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:06:18.738 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:06:18.738 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:06:18.738 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:06:18.738 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:06:18.738 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:06:18.738 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:06:18.738 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:06:18.738 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:06:18.738 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:06:18.738 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:06:18.738 
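[annotation, not log output] The bracketed [n/76] entries are ninja progress lines for the xnvme subproject, each naming the object file it produced. A hedged sketch of re-running a single step in isolation, with the target path copied from entry [20/76] above:

  ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir \
      lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o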
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:06:18.996 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:06:18.996 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:06:18.996 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:06:18.996 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:06:18.996 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:06:18.996 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:06:18.996 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:06:18.996 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:06:18.996 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:06:18.996 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:06:18.996 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:06:18.996 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:06:18.996 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:06:18.996 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:06:18.996 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:06:19.254 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:06:19.254 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:06:19.254 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:06:19.254 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:06:19.254 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:06:19.254 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:06:19.254 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:06:19.821 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:06:19.821 [75/76] Linking static target lib/libxnvme.a 00:06:19.821 [76/76] Linking target lib/libxnvme.so.0.7.5 00:06:19.821 INFO: autodetecting backend as ninja 00:06:19.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:19.821 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:07:16.047 CC lib/ut_mock/mock.o 00:07:16.047 CC lib/ut/ut.o 00:07:16.047 CC lib/log/log.o 00:07:16.047 CC lib/log/log_flags.o 00:07:16.047 CC lib/log/log_deprecated.o 00:07:16.047 LIB libspdk_ut_mock.a 00:07:16.047 LIB libspdk_ut.a 00:07:16.047 LIB libspdk_log.a 00:07:16.047 SO libspdk_ut_mock.so.6.0 00:07:16.047 SO libspdk_ut.so.2.0 00:07:16.047 SO libspdk_log.so.7.1 00:07:16.047 SYMLINK libspdk_ut_mock.so 00:07:16.047 SYMLINK libspdk_ut.so 00:07:16.047 SYMLINK libspdk_log.so 00:07:16.047 CXX lib/trace_parser/trace.o 00:07:16.047 CC lib/dma/dma.o 00:07:16.047 CC lib/util/base64.o 00:07:16.048 CC lib/util/bit_array.o 00:07:16.048 CC lib/util/crc16.o 00:07:16.048 CC lib/util/cpuset.o 00:07:16.048 CC lib/ioat/ioat.o 00:07:16.048 CC lib/util/crc32.o 00:07:16.048 CC lib/util/crc32c.o 00:07:16.048 CC lib/vfio_user/host/vfio_user_pci.o 00:07:16.048 CC lib/util/crc32_ieee.o 00:07:16.048 CC lib/vfio_user/host/vfio_user.o 00:07:16.048 CC lib/util/crc64.o 00:07:16.048 LIB libspdk_dma.a 00:07:16.048 CC lib/util/dif.o 00:07:16.048 SO libspdk_dma.so.5.0 00:07:16.048 CC lib/util/fd.o 00:07:16.048 CC lib/util/fd_group.o 00:07:16.048 SYMLINK libspdk_dma.so 00:07:16.048 CC lib/util/file.o 00:07:16.048 CC lib/util/hexlify.o 00:07:16.048 CC lib/util/iov.o 00:07:16.048 LIB libspdk_ioat.a 
00:07:16.048 SO libspdk_ioat.so.7.0 00:07:16.048 CC lib/util/math.o 00:07:16.048 CC lib/util/net.o 00:07:16.048 SYMLINK libspdk_ioat.so 00:07:16.048 CC lib/util/pipe.o 00:07:16.048 CC lib/util/strerror_tls.o 00:07:16.048 LIB libspdk_vfio_user.a 00:07:16.048 SO libspdk_vfio_user.so.5.0 00:07:16.048 CC lib/util/string.o 00:07:16.048 CC lib/util/uuid.o 00:07:16.048 SYMLINK libspdk_vfio_user.so 00:07:16.048 CC lib/util/xor.o 00:07:16.048 CC lib/util/zipf.o 00:07:16.048 CC lib/util/md5.o 00:07:16.048 LIB libspdk_util.a 00:07:16.048 SO libspdk_util.so.10.1 00:07:16.048 LIB libspdk_trace_parser.a 00:07:16.048 SYMLINK libspdk_util.so 00:07:16.048 SO libspdk_trace_parser.so.6.0 00:07:16.048 SYMLINK libspdk_trace_parser.so 00:07:16.048 CC lib/vmd/vmd.o 00:07:16.048 CC lib/vmd/led.o 00:07:16.048 CC lib/conf/conf.o 00:07:16.048 CC lib/rdma_utils/rdma_utils.o 00:07:16.048 CC lib/idxd/idxd.o 00:07:16.048 CC lib/idxd/idxd_user.o 00:07:16.048 CC lib/idxd/idxd_kernel.o 00:07:16.048 CC lib/json/json_parse.o 00:07:16.048 CC lib/json/json_util.o 00:07:16.048 CC lib/env_dpdk/env.o 00:07:16.048 CC lib/env_dpdk/memory.o 00:07:16.048 LIB libspdk_conf.a 00:07:16.048 CC lib/json/json_write.o 00:07:16.048 CC lib/env_dpdk/pci.o 00:07:16.048 CC lib/env_dpdk/init.o 00:07:16.048 SO libspdk_conf.so.6.0 00:07:16.048 LIB libspdk_rdma_utils.a 00:07:16.048 CC lib/env_dpdk/threads.o 00:07:16.048 SO libspdk_rdma_utils.so.1.0 00:07:16.048 SYMLINK libspdk_conf.so 00:07:16.048 CC lib/env_dpdk/pci_ioat.o 00:07:16.048 SYMLINK libspdk_rdma_utils.so 00:07:16.048 CC lib/env_dpdk/pci_virtio.o 00:07:16.048 CC lib/env_dpdk/pci_vmd.o 00:07:16.048 CC lib/env_dpdk/pci_idxd.o 00:07:16.048 CC lib/env_dpdk/pci_event.o 00:07:16.048 CC lib/env_dpdk/sigbus_handler.o 00:07:16.048 LIB libspdk_json.a 00:07:16.048 CC lib/env_dpdk/pci_dpdk.o 00:07:16.048 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:16.048 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:16.048 SO libspdk_json.so.6.0 00:07:16.048 SYMLINK libspdk_json.so 00:07:16.048 LIB libspdk_idxd.a 00:07:16.048 LIB libspdk_vmd.a 00:07:16.048 SO libspdk_idxd.so.12.1 00:07:16.048 SO libspdk_vmd.so.6.0 00:07:16.048 CC lib/rdma_provider/common.o 00:07:16.048 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:16.048 SYMLINK libspdk_idxd.so 00:07:16.048 SYMLINK libspdk_vmd.so 00:07:16.048 CC lib/jsonrpc/jsonrpc_server.o 00:07:16.048 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:16.048 CC lib/jsonrpc/jsonrpc_client.o 00:07:16.048 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:16.048 LIB libspdk_rdma_provider.a 00:07:16.048 SO libspdk_rdma_provider.so.7.0 00:07:16.048 LIB libspdk_jsonrpc.a 00:07:16.048 SYMLINK libspdk_rdma_provider.so 00:07:16.048 SO libspdk_jsonrpc.so.6.0 00:07:16.048 SYMLINK libspdk_jsonrpc.so 00:07:16.048 CC lib/rpc/rpc.o 00:07:16.048 LIB libspdk_rpc.a 00:07:16.048 SO libspdk_rpc.so.6.0 00:07:16.048 LIB libspdk_env_dpdk.a 00:07:16.048 SYMLINK libspdk_rpc.so 00:07:16.048 SO libspdk_env_dpdk.so.15.1 00:07:16.048 CC lib/trace/trace_flags.o 00:07:16.048 CC lib/trace/trace.o 00:07:16.048 CC lib/trace/trace_rpc.o 00:07:16.048 CC lib/notify/notify.o 00:07:16.048 CC lib/notify/notify_rpc.o 00:07:16.048 CC lib/keyring/keyring_rpc.o 00:07:16.048 SYMLINK libspdk_env_dpdk.so 00:07:16.048 CC lib/keyring/keyring.o 00:07:16.048 LIB libspdk_notify.a 00:07:16.048 SO libspdk_notify.so.6.0 00:07:16.048 LIB libspdk_trace.a 00:07:16.048 SO libspdk_trace.so.11.0 00:07:16.048 LIB libspdk_keyring.a 00:07:16.048 SYMLINK libspdk_notify.so 00:07:16.048 SO libspdk_keyring.so.2.0 00:07:16.048 SYMLINK libspdk_trace.so 00:07:16.048 SYMLINK 
libspdk_keyring.so 00:07:16.048 CC lib/sock/sock.o 00:07:16.048 CC lib/sock/sock_rpc.o 00:07:16.048 CC lib/thread/thread.o 00:07:16.048 CC lib/thread/iobuf.o 00:07:16.305 LIB libspdk_sock.a 00:07:16.305 SO libspdk_sock.so.10.0 00:07:16.562 SYMLINK libspdk_sock.so 00:07:16.851 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:16.851 CC lib/nvme/nvme_fabric.o 00:07:16.851 CC lib/nvme/nvme_ctrlr.o 00:07:16.851 CC lib/nvme/nvme_ns_cmd.o 00:07:16.851 CC lib/nvme/nvme_pcie_common.o 00:07:16.851 CC lib/nvme/nvme_ns.o 00:07:16.851 CC lib/nvme/nvme.o 00:07:16.851 CC lib/nvme/nvme_qpair.o 00:07:16.851 CC lib/nvme/nvme_pcie.o 00:07:17.784 CC lib/nvme/nvme_quirks.o 00:07:17.784 CC lib/nvme/nvme_transport.o 00:07:17.784 CC lib/nvme/nvme_discovery.o 00:07:17.784 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:17.784 LIB libspdk_thread.a 00:07:17.784 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:17.784 CC lib/nvme/nvme_tcp.o 00:07:17.784 SO libspdk_thread.so.11.0 00:07:18.042 CC lib/nvme/nvme_opal.o 00:07:18.042 SYMLINK libspdk_thread.so 00:07:18.042 CC lib/nvme/nvme_io_msg.o 00:07:18.299 CC lib/nvme/nvme_poll_group.o 00:07:18.299 CC lib/nvme/nvme_zns.o 00:07:18.558 CC lib/nvme/nvme_stubs.o 00:07:18.558 CC lib/nvme/nvme_auth.o 00:07:18.558 CC lib/nvme/nvme_cuse.o 00:07:18.558 CC lib/nvme/nvme_rdma.o 00:07:18.840 CC lib/accel/accel.o 00:07:18.840 CC lib/accel/accel_rpc.o 00:07:18.840 CC lib/accel/accel_sw.o 00:07:19.098 CC lib/blob/blobstore.o 00:07:19.355 CC lib/virtio/virtio.o 00:07:19.355 CC lib/init/json_config.o 00:07:19.355 CC lib/virtio/virtio_vhost_user.o 00:07:19.613 CC lib/virtio/virtio_vfio_user.o 00:07:19.613 CC lib/init/subsystem.o 00:07:19.613 CC lib/init/subsystem_rpc.o 00:07:19.872 CC lib/init/rpc.o 00:07:19.872 CC lib/virtio/virtio_pci.o 00:07:19.872 CC lib/blob/request.o 00:07:19.872 CC lib/blob/zeroes.o 00:07:19.872 CC lib/blob/blob_bs_dev.o 00:07:19.872 LIB libspdk_init.a 00:07:20.130 SO libspdk_init.so.6.0 00:07:20.130 CC lib/fsdev/fsdev.o 00:07:20.130 CC lib/fsdev/fsdev_io.o 00:07:20.130 SYMLINK libspdk_init.so 00:07:20.130 CC lib/fsdev/fsdev_rpc.o 00:07:20.130 LIB libspdk_accel.a 00:07:20.130 SO libspdk_accel.so.16.0 00:07:20.388 LIB libspdk_virtio.a 00:07:20.388 SYMLINK libspdk_accel.so 00:07:20.388 CC lib/event/app.o 00:07:20.388 CC lib/event/log_rpc.o 00:07:20.388 CC lib/event/reactor.o 00:07:20.388 CC lib/event/app_rpc.o 00:07:20.388 SO libspdk_virtio.so.7.0 00:07:20.388 SYMLINK libspdk_virtio.so 00:07:20.388 CC lib/event/scheduler_static.o 00:07:20.645 CC lib/bdev/bdev.o 00:07:20.645 CC lib/bdev/bdev_rpc.o 00:07:20.645 CC lib/bdev/bdev_zone.o 00:07:20.645 CC lib/bdev/part.o 00:07:20.645 LIB libspdk_nvme.a 00:07:20.645 CC lib/bdev/scsi_nvme.o 00:07:20.980 SO libspdk_nvme.so.15.0 00:07:20.980 LIB libspdk_fsdev.a 00:07:20.980 SO libspdk_fsdev.so.2.0 00:07:20.980 LIB libspdk_event.a 00:07:20.980 SYMLINK libspdk_fsdev.so 00:07:20.980 SO libspdk_event.so.14.0 00:07:21.238 SYMLINK libspdk_event.so 00:07:21.238 SYMLINK libspdk_nvme.so 00:07:21.238 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:22.172 LIB libspdk_fuse_dispatcher.a 00:07:22.172 SO libspdk_fuse_dispatcher.so.1.0 00:07:22.429 SYMLINK libspdk_fuse_dispatcher.so 00:07:24.326 LIB libspdk_blob.a 00:07:24.326 SO libspdk_blob.so.12.0 00:07:24.326 SYMLINK libspdk_blob.so 00:07:24.326 LIB libspdk_bdev.a 00:07:24.583 SO libspdk_bdev.so.17.0 00:07:24.583 CC lib/blobfs/tree.o 00:07:24.583 CC lib/blobfs/blobfs.o 00:07:24.583 CC lib/lvol/lvol.o 00:07:24.583 SYMLINK libspdk_bdev.so 00:07:24.842 CC lib/ublk/ublk.o 00:07:24.842 CC lib/nbd/nbd.o 00:07:24.842 CC 
lib/ublk/ublk_rpc.o 00:07:24.842 CC lib/nbd/nbd_rpc.o 00:07:24.842 CC lib/ftl/ftl_core.o 00:07:24.842 CC lib/ftl/ftl_init.o 00:07:24.842 CC lib/nvmf/ctrlr.o 00:07:24.842 CC lib/scsi/dev.o 00:07:25.100 CC lib/scsi/lun.o 00:07:25.100 CC lib/scsi/port.o 00:07:25.358 CC lib/nvmf/ctrlr_discovery.o 00:07:25.358 CC lib/ftl/ftl_layout.o 00:07:25.358 CC lib/scsi/scsi.o 00:07:25.358 CC lib/ftl/ftl_debug.o 00:07:25.616 LIB libspdk_nbd.a 00:07:25.616 CC lib/scsi/scsi_bdev.o 00:07:25.616 CC lib/scsi/scsi_pr.o 00:07:25.616 SO libspdk_nbd.so.7.0 00:07:25.616 SYMLINK libspdk_nbd.so 00:07:25.616 CC lib/ftl/ftl_io.o 00:07:25.616 CC lib/ftl/ftl_sb.o 00:07:25.874 LIB libspdk_blobfs.a 00:07:25.874 SO libspdk_blobfs.so.11.0 00:07:25.874 LIB libspdk_ublk.a 00:07:25.874 CC lib/scsi/scsi_rpc.o 00:07:25.874 SO libspdk_ublk.so.3.0 00:07:25.874 SYMLINK libspdk_blobfs.so 00:07:25.874 CC lib/scsi/task.o 00:07:25.874 SYMLINK libspdk_ublk.so 00:07:25.874 CC lib/ftl/ftl_l2p.o 00:07:25.874 CC lib/ftl/ftl_l2p_flat.o 00:07:25.874 CC lib/nvmf/ctrlr_bdev.o 00:07:25.874 CC lib/nvmf/subsystem.o 00:07:26.132 CC lib/nvmf/nvmf.o 00:07:26.132 LIB libspdk_lvol.a 00:07:26.132 SO libspdk_lvol.so.11.0 00:07:26.132 CC lib/ftl/ftl_nv_cache.o 00:07:26.132 SYMLINK libspdk_lvol.so 00:07:26.132 CC lib/ftl/ftl_band.o 00:07:26.132 CC lib/ftl/ftl_band_ops.o 00:07:26.132 CC lib/ftl/ftl_writer.o 00:07:26.132 LIB libspdk_scsi.a 00:07:26.132 CC lib/ftl/ftl_rq.o 00:07:26.390 SO libspdk_scsi.so.9.0 00:07:26.390 SYMLINK libspdk_scsi.so 00:07:26.390 CC lib/ftl/ftl_reloc.o 00:07:26.649 CC lib/ftl/ftl_l2p_cache.o 00:07:26.649 CC lib/iscsi/conn.o 00:07:26.649 CC lib/ftl/ftl_p2l.o 00:07:26.649 CC lib/vhost/vhost.o 00:07:26.907 CC lib/ftl/ftl_p2l_log.o 00:07:27.164 CC lib/nvmf/nvmf_rpc.o 00:07:27.164 CC lib/vhost/vhost_rpc.o 00:07:27.421 CC lib/vhost/vhost_scsi.o 00:07:27.421 CC lib/iscsi/init_grp.o 00:07:27.421 CC lib/iscsi/iscsi.o 00:07:27.421 CC lib/nvmf/transport.o 00:07:27.679 CC lib/nvmf/tcp.o 00:07:27.679 CC lib/nvmf/stubs.o 00:07:27.679 CC lib/ftl/mngt/ftl_mngt.o 00:07:27.679 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:27.938 CC lib/vhost/vhost_blk.o 00:07:27.938 CC lib/vhost/rte_vhost_user.o 00:07:28.197 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:28.197 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:28.197 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:28.460 CC lib/nvmf/mdns_server.o 00:07:28.460 CC lib/nvmf/rdma.o 00:07:28.460 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:28.460 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:28.460 CC lib/iscsi/param.o 00:07:28.733 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:28.733 CC lib/iscsi/portal_grp.o 00:07:28.733 CC lib/nvmf/auth.o 00:07:28.997 CC lib/iscsi/tgt_node.o 00:07:28.997 CC lib/iscsi/iscsi_subsystem.o 00:07:28.997 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:28.997 CC lib/iscsi/iscsi_rpc.o 00:07:29.255 CC lib/iscsi/task.o 00:07:29.255 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:29.255 LIB libspdk_vhost.a 00:07:29.255 SO libspdk_vhost.so.8.0 00:07:29.255 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:29.513 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:29.513 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:29.513 CC lib/ftl/utils/ftl_conf.o 00:07:29.514 SYMLINK libspdk_vhost.so 00:07:29.514 CC lib/ftl/utils/ftl_md.o 00:07:29.514 CC lib/ftl/utils/ftl_mempool.o 00:07:29.514 CC lib/ftl/utils/ftl_bitmap.o 00:07:29.514 LIB libspdk_iscsi.a 00:07:29.772 CC lib/ftl/utils/ftl_property.o 00:07:29.772 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:29.772 SO libspdk_iscsi.so.8.0 00:07:29.772 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:29.772 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
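[annotation, not log output] The CC/LIB/SO/SYMLINK prefixes in this stretch are SPDK's quiet-make markers: compile an object, archive a static library, link the shared object, then install its version symlink. Assuming SPDK's usual V=1 verbose switch (an assumption, not shown in this log), the full command lines behind these markers can be echoed with:

  make -C /home/vagrant/spdk_repo/spdk V=1 -j10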
00:07:30.030 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:30.030 SYMLINK libspdk_iscsi.so 00:07:30.030 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:30.030 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:30.030 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:30.030 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:30.030 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:30.030 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:30.030 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:30.289 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:30.289 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:30.289 CC lib/ftl/base/ftl_base_dev.o 00:07:30.289 CC lib/ftl/base/ftl_base_bdev.o 00:07:30.289 CC lib/ftl/ftl_trace.o 00:07:30.547 LIB libspdk_ftl.a 00:07:30.805 SO libspdk_ftl.so.9.0 00:07:31.064 SYMLINK libspdk_ftl.so 00:07:31.323 LIB libspdk_nvmf.a 00:07:31.582 SO libspdk_nvmf.so.20.0 00:07:31.840 SYMLINK libspdk_nvmf.so 00:07:32.099 CC module/env_dpdk/env_dpdk_rpc.o 00:07:32.099 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:32.357 CC module/accel/ioat/accel_ioat.o 00:07:32.357 CC module/scheduler/gscheduler/gscheduler.o 00:07:32.357 CC module/blob/bdev/blob_bdev.o 00:07:32.357 CC module/sock/posix/posix.o 00:07:32.357 CC module/accel/error/accel_error.o 00:07:32.357 CC module/keyring/file/keyring.o 00:07:32.357 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:32.357 CC module/fsdev/aio/fsdev_aio.o 00:07:32.357 LIB libspdk_env_dpdk_rpc.a 00:07:32.357 SO libspdk_env_dpdk_rpc.so.6.0 00:07:32.357 LIB libspdk_scheduler_gscheduler.a 00:07:32.357 CC module/keyring/file/keyring_rpc.o 00:07:32.357 SYMLINK libspdk_env_dpdk_rpc.so 00:07:32.357 CC module/accel/ioat/accel_ioat_rpc.o 00:07:32.357 LIB libspdk_scheduler_dpdk_governor.a 00:07:32.357 SO libspdk_scheduler_gscheduler.so.4.0 00:07:32.357 CC module/accel/error/accel_error_rpc.o 00:07:32.357 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:32.616 SYMLINK libspdk_scheduler_gscheduler.so 00:07:32.616 LIB libspdk_scheduler_dynamic.a 00:07:32.616 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:32.616 SO libspdk_scheduler_dynamic.so.4.0 00:07:32.616 LIB libspdk_keyring_file.a 00:07:32.616 LIB libspdk_blob_bdev.a 00:07:32.616 LIB libspdk_accel_ioat.a 00:07:32.616 SO libspdk_accel_ioat.so.6.0 00:07:32.616 SO libspdk_keyring_file.so.2.0 00:07:32.616 LIB libspdk_accel_error.a 00:07:32.616 SO libspdk_blob_bdev.so.12.0 00:07:32.616 SO libspdk_accel_error.so.2.0 00:07:32.616 SYMLINK libspdk_scheduler_dynamic.so 00:07:32.616 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:32.616 SYMLINK libspdk_keyring_file.so 00:07:32.616 CC module/fsdev/aio/linux_aio_mgr.o 00:07:32.616 SYMLINK libspdk_accel_ioat.so 00:07:32.616 CC module/accel/iaa/accel_iaa.o 00:07:32.616 CC module/accel/dsa/accel_dsa.o 00:07:32.616 CC module/accel/iaa/accel_iaa_rpc.o 00:07:32.616 SYMLINK libspdk_accel_error.so 00:07:32.616 CC module/accel/dsa/accel_dsa_rpc.o 00:07:32.616 SYMLINK libspdk_blob_bdev.so 00:07:32.878 CC module/keyring/linux/keyring.o 00:07:32.878 CC module/keyring/linux/keyring_rpc.o 00:07:32.878 LIB libspdk_keyring_linux.a 00:07:32.878 LIB libspdk_accel_iaa.a 00:07:32.878 SO libspdk_keyring_linux.so.1.0 00:07:33.136 SO libspdk_accel_iaa.so.3.0 00:07:33.136 SYMLINK libspdk_keyring_linux.so 00:07:33.136 SYMLINK libspdk_accel_iaa.so 00:07:33.136 LIB libspdk_accel_dsa.a 00:07:33.136 CC module/bdev/gpt/gpt.o 00:07:33.136 SO libspdk_accel_dsa.so.5.0 00:07:33.136 CC module/bdev/error/vbdev_error.o 00:07:33.136 CC module/bdev/delay/vbdev_delay.o 00:07:33.136 CC module/bdev/lvol/vbdev_lvol.o 00:07:33.136 CC module/blobfs/bdev/blobfs_bdev.o 00:07:33.136 CC 
module/bdev/malloc/bdev_malloc.o 00:07:33.136 LIB libspdk_sock_posix.a 00:07:33.136 SYMLINK libspdk_accel_dsa.so 00:07:33.136 CC module/bdev/error/vbdev_error_rpc.o 00:07:33.395 CC module/bdev/null/bdev_null.o 00:07:33.395 SO libspdk_sock_posix.so.6.0 00:07:33.395 LIB libspdk_fsdev_aio.a 00:07:33.395 SYMLINK libspdk_sock_posix.so 00:07:33.395 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:33.395 CC module/bdev/gpt/vbdev_gpt.o 00:07:33.395 SO libspdk_fsdev_aio.so.1.0 00:07:33.395 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:33.395 SYMLINK libspdk_fsdev_aio.so 00:07:33.395 LIB libspdk_bdev_error.a 00:07:33.653 SO libspdk_bdev_error.so.6.0 00:07:33.653 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:33.653 SYMLINK libspdk_bdev_error.so 00:07:33.653 LIB libspdk_blobfs_bdev.a 00:07:33.653 CC module/bdev/null/bdev_null_rpc.o 00:07:33.653 CC module/bdev/nvme/bdev_nvme.o 00:07:33.653 CC module/bdev/passthru/vbdev_passthru.o 00:07:33.653 SO libspdk_blobfs_bdev.so.6.0 00:07:33.653 LIB libspdk_bdev_malloc.a 00:07:33.653 LIB libspdk_bdev_gpt.a 00:07:33.653 SYMLINK libspdk_blobfs_bdev.so 00:07:33.653 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:33.653 SO libspdk_bdev_malloc.so.6.0 00:07:33.653 SO libspdk_bdev_gpt.so.6.0 00:07:33.911 CC module/bdev/raid/bdev_raid.o 00:07:33.911 LIB libspdk_bdev_delay.a 00:07:33.911 CC module/bdev/split/vbdev_split.o 00:07:33.911 SO libspdk_bdev_delay.so.6.0 00:07:33.911 SYMLINK libspdk_bdev_malloc.so 00:07:33.911 SYMLINK libspdk_bdev_gpt.so 00:07:33.911 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:33.911 LIB libspdk_bdev_null.a 00:07:33.911 SO libspdk_bdev_null.so.6.0 00:07:33.911 SYMLINK libspdk_bdev_delay.so 00:07:33.911 SYMLINK libspdk_bdev_null.so 00:07:33.911 CC module/bdev/split/vbdev_split_rpc.o 00:07:33.911 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:34.169 CC module/bdev/xnvme/bdev_xnvme.o 00:07:34.169 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:34.169 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:34.169 CC module/bdev/aio/bdev_aio.o 00:07:34.169 LIB libspdk_bdev_split.a 00:07:34.169 SO libspdk_bdev_split.so.6.0 00:07:34.169 LIB libspdk_bdev_passthru.a 00:07:34.169 CC module/bdev/nvme/nvme_rpc.o 00:07:34.428 SYMLINK libspdk_bdev_split.so 00:07:34.428 SO libspdk_bdev_passthru.so.6.0 00:07:34.428 LIB libspdk_bdev_lvol.a 00:07:34.428 SO libspdk_bdev_lvol.so.6.0 00:07:34.428 SYMLINK libspdk_bdev_passthru.so 00:07:34.428 LIB libspdk_bdev_xnvme.a 00:07:34.428 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:34.428 SO libspdk_bdev_xnvme.so.3.0 00:07:34.428 SYMLINK libspdk_bdev_lvol.so 00:07:34.428 CC module/bdev/aio/bdev_aio_rpc.o 00:07:34.428 CC module/bdev/ftl/bdev_ftl.o 00:07:34.687 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:34.687 SYMLINK libspdk_bdev_xnvme.so 00:07:34.687 CC module/bdev/raid/bdev_raid_rpc.o 00:07:34.687 CC module/bdev/nvme/bdev_mdns_client.o 00:07:34.687 LIB libspdk_bdev_aio.a 00:07:34.687 LIB libspdk_bdev_zone_block.a 00:07:34.687 CC module/bdev/iscsi/bdev_iscsi.o 00:07:34.687 SO libspdk_bdev_aio.so.6.0 00:07:34.687 SO libspdk_bdev_zone_block.so.6.0 00:07:34.687 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:34.687 SYMLINK libspdk_bdev_aio.so 00:07:34.687 SYMLINK libspdk_bdev_zone_block.so 00:07:34.687 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:34.687 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:34.945 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:34.945 CC module/bdev/nvme/vbdev_opal.o 00:07:34.945 CC module/bdev/raid/bdev_raid_sb.o 00:07:34.945 LIB libspdk_bdev_ftl.a 00:07:34.945 SO libspdk_bdev_ftl.so.6.0 00:07:34.945 SYMLINK 
libspdk_bdev_ftl.so 00:07:34.945 CC module/bdev/raid/raid0.o 00:07:34.945 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:35.203 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:35.203 LIB libspdk_bdev_iscsi.a 00:07:35.203 CC module/bdev/raid/raid1.o 00:07:35.203 CC module/bdev/raid/concat.o 00:07:35.203 SO libspdk_bdev_iscsi.so.6.0 00:07:35.203 SYMLINK libspdk_bdev_iscsi.so 00:07:35.462 LIB libspdk_bdev_virtio.a 00:07:35.462 SO libspdk_bdev_virtio.so.6.0 00:07:35.462 LIB libspdk_bdev_raid.a 00:07:35.720 SO libspdk_bdev_raid.so.6.0 00:07:35.720 SYMLINK libspdk_bdev_virtio.so 00:07:35.720 SYMLINK libspdk_bdev_raid.so 00:07:37.112 LIB libspdk_bdev_nvme.a 00:07:37.369 SO libspdk_bdev_nvme.so.7.1 00:07:37.627 SYMLINK libspdk_bdev_nvme.so 00:07:37.929 CC module/event/subsystems/sock/sock.o 00:07:38.209 CC module/event/subsystems/scheduler/scheduler.o 00:07:38.209 CC module/event/subsystems/keyring/keyring.o 00:07:38.209 CC module/event/subsystems/vmd/vmd.o 00:07:38.209 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:38.209 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:38.209 CC module/event/subsystems/fsdev/fsdev.o 00:07:38.209 CC module/event/subsystems/iobuf/iobuf.o 00:07:38.209 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:38.209 LIB libspdk_event_sock.a 00:07:38.209 LIB libspdk_event_vmd.a 00:07:38.209 LIB libspdk_event_keyring.a 00:07:38.209 LIB libspdk_event_scheduler.a 00:07:38.209 LIB libspdk_event_vhost_blk.a 00:07:38.209 LIB libspdk_event_fsdev.a 00:07:38.209 SO libspdk_event_sock.so.5.0 00:07:38.209 SO libspdk_event_vmd.so.6.0 00:07:38.209 SO libspdk_event_keyring.so.1.0 00:07:38.209 SO libspdk_event_scheduler.so.4.0 00:07:38.209 LIB libspdk_event_iobuf.a 00:07:38.209 SO libspdk_event_vhost_blk.so.3.0 00:07:38.209 SO libspdk_event_fsdev.so.1.0 00:07:38.209 SYMLINK libspdk_event_sock.so 00:07:38.209 SO libspdk_event_iobuf.so.3.0 00:07:38.209 SYMLINK libspdk_event_keyring.so 00:07:38.209 SYMLINK libspdk_event_vmd.so 00:07:38.209 SYMLINK libspdk_event_scheduler.so 00:07:38.209 SYMLINK libspdk_event_vhost_blk.so 00:07:38.209 SYMLINK libspdk_event_fsdev.so 00:07:38.468 SYMLINK libspdk_event_iobuf.so 00:07:38.726 CC module/event/subsystems/accel/accel.o 00:07:38.985 LIB libspdk_event_accel.a 00:07:38.985 SO libspdk_event_accel.so.6.0 00:07:38.985 SYMLINK libspdk_event_accel.so 00:07:39.242 CC module/event/subsystems/bdev/bdev.o 00:07:39.499 LIB libspdk_event_bdev.a 00:07:39.499 SO libspdk_event_bdev.so.6.0 00:07:39.757 SYMLINK libspdk_event_bdev.so 00:07:39.757 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:39.757 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:39.757 CC module/event/subsystems/ublk/ublk.o 00:07:39.757 CC module/event/subsystems/scsi/scsi.o 00:07:39.757 CC module/event/subsystems/nbd/nbd.o 00:07:40.015 LIB libspdk_event_ublk.a 00:07:40.015 LIB libspdk_event_nbd.a 00:07:40.015 SO libspdk_event_ublk.so.3.0 00:07:40.015 LIB libspdk_event_scsi.a 00:07:40.015 SO libspdk_event_nbd.so.6.0 00:07:40.015 SYMLINK libspdk_event_ublk.so 00:07:40.015 SO libspdk_event_scsi.so.6.0 00:07:40.273 SYMLINK libspdk_event_nbd.so 00:07:40.273 LIB libspdk_event_nvmf.a 00:07:40.273 SYMLINK libspdk_event_scsi.so 00:07:40.273 SO libspdk_event_nvmf.so.6.0 00:07:40.273 SYMLINK libspdk_event_nvmf.so 00:07:40.531 CC module/event/subsystems/iscsi/iscsi.o 00:07:40.531 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:40.531 LIB libspdk_event_iscsi.a 00:07:40.789 SO libspdk_event_iscsi.so.6.0 00:07:40.789 LIB libspdk_event_vhost_scsi.a 00:07:40.789 SO libspdk_event_vhost_scsi.so.3.0 
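[annotation, not log output] Because configure ran with --with-shared, each LIB entry above is accompanied by an SO/SYMLINK pair, and anything linked against these libraries needs the build output directories on the loader path at run time. A hedged sketch, with the directory layout assumed from the paths in this log:

  export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH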
00:07:40.789 SYMLINK libspdk_event_iscsi.so 00:07:40.789 SYMLINK libspdk_event_vhost_scsi.so 00:07:41.047 SO libspdk.so.6.0 00:07:41.047 SYMLINK libspdk.so 00:07:41.305 CC test/rpc_client/rpc_client_test.o 00:07:41.305 CC app/trace_record/trace_record.o 00:07:41.305 TEST_HEADER include/spdk/accel.h 00:07:41.305 TEST_HEADER include/spdk/accel_module.h 00:07:41.305 TEST_HEADER include/spdk/assert.h 00:07:41.305 TEST_HEADER include/spdk/barrier.h 00:07:41.305 TEST_HEADER include/spdk/base64.h 00:07:41.305 CXX app/trace/trace.o 00:07:41.305 TEST_HEADER include/spdk/bdev.h 00:07:41.305 TEST_HEADER include/spdk/bdev_module.h 00:07:41.305 TEST_HEADER include/spdk/bdev_zone.h 00:07:41.305 TEST_HEADER include/spdk/bit_array.h 00:07:41.305 TEST_HEADER include/spdk/bit_pool.h 00:07:41.305 TEST_HEADER include/spdk/blob_bdev.h 00:07:41.305 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:41.305 TEST_HEADER include/spdk/blobfs.h 00:07:41.305 TEST_HEADER include/spdk/blob.h 00:07:41.305 TEST_HEADER include/spdk/conf.h 00:07:41.305 TEST_HEADER include/spdk/config.h 00:07:41.305 TEST_HEADER include/spdk/cpuset.h 00:07:41.305 TEST_HEADER include/spdk/crc16.h 00:07:41.305 TEST_HEADER include/spdk/crc32.h 00:07:41.305 TEST_HEADER include/spdk/crc64.h 00:07:41.305 CC app/nvmf_tgt/nvmf_main.o 00:07:41.305 TEST_HEADER include/spdk/dif.h 00:07:41.305 TEST_HEADER include/spdk/dma.h 00:07:41.305 TEST_HEADER include/spdk/endian.h 00:07:41.305 TEST_HEADER include/spdk/env_dpdk.h 00:07:41.305 TEST_HEADER include/spdk/env.h 00:07:41.305 TEST_HEADER include/spdk/event.h 00:07:41.305 TEST_HEADER include/spdk/fd_group.h 00:07:41.305 TEST_HEADER include/spdk/fd.h 00:07:41.305 TEST_HEADER include/spdk/file.h 00:07:41.305 TEST_HEADER include/spdk/fsdev.h 00:07:41.305 TEST_HEADER include/spdk/fsdev_module.h 00:07:41.305 TEST_HEADER include/spdk/ftl.h 00:07:41.305 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:41.305 TEST_HEADER include/spdk/gpt_spec.h 00:07:41.305 TEST_HEADER include/spdk/hexlify.h 00:07:41.305 TEST_HEADER include/spdk/histogram_data.h 00:07:41.305 TEST_HEADER include/spdk/idxd.h 00:07:41.305 CC test/thread/poller_perf/poller_perf.o 00:07:41.305 TEST_HEADER include/spdk/idxd_spec.h 00:07:41.305 TEST_HEADER include/spdk/init.h 00:07:41.305 TEST_HEADER include/spdk/ioat.h 00:07:41.305 TEST_HEADER include/spdk/ioat_spec.h 00:07:41.305 TEST_HEADER include/spdk/iscsi_spec.h 00:07:41.305 TEST_HEADER include/spdk/json.h 00:07:41.305 CC examples/util/zipf/zipf.o 00:07:41.305 TEST_HEADER include/spdk/jsonrpc.h 00:07:41.305 TEST_HEADER include/spdk/keyring.h 00:07:41.305 TEST_HEADER include/spdk/keyring_module.h 00:07:41.305 TEST_HEADER include/spdk/likely.h 00:07:41.305 TEST_HEADER include/spdk/log.h 00:07:41.305 TEST_HEADER include/spdk/lvol.h 00:07:41.305 TEST_HEADER include/spdk/md5.h 00:07:41.305 TEST_HEADER include/spdk/memory.h 00:07:41.305 TEST_HEADER include/spdk/mmio.h 00:07:41.305 TEST_HEADER include/spdk/nbd.h 00:07:41.305 TEST_HEADER include/spdk/net.h 00:07:41.305 CC test/dma/test_dma/test_dma.o 00:07:41.305 TEST_HEADER include/spdk/notify.h 00:07:41.305 TEST_HEADER include/spdk/nvme.h 00:07:41.305 TEST_HEADER include/spdk/nvme_intel.h 00:07:41.305 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:41.305 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:41.305 TEST_HEADER include/spdk/nvme_spec.h 00:07:41.305 TEST_HEADER include/spdk/nvme_zns.h 00:07:41.305 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:41.305 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:41.305 TEST_HEADER include/spdk/nvmf.h 00:07:41.305 TEST_HEADER 
include/spdk/nvmf_spec.h 00:07:41.305 TEST_HEADER include/spdk/nvmf_transport.h 00:07:41.305 CC test/app/bdev_svc/bdev_svc.o 00:07:41.305 TEST_HEADER include/spdk/opal.h 00:07:41.305 TEST_HEADER include/spdk/opal_spec.h 00:07:41.305 TEST_HEADER include/spdk/pci_ids.h 00:07:41.305 TEST_HEADER include/spdk/pipe.h 00:07:41.305 TEST_HEADER include/spdk/queue.h 00:07:41.305 TEST_HEADER include/spdk/reduce.h 00:07:41.563 TEST_HEADER include/spdk/rpc.h 00:07:41.563 TEST_HEADER include/spdk/scheduler.h 00:07:41.563 TEST_HEADER include/spdk/scsi.h 00:07:41.563 TEST_HEADER include/spdk/scsi_spec.h 00:07:41.563 TEST_HEADER include/spdk/sock.h 00:07:41.563 TEST_HEADER include/spdk/stdinc.h 00:07:41.563 TEST_HEADER include/spdk/string.h 00:07:41.563 TEST_HEADER include/spdk/thread.h 00:07:41.563 TEST_HEADER include/spdk/trace.h 00:07:41.563 TEST_HEADER include/spdk/trace_parser.h 00:07:41.563 CC test/env/mem_callbacks/mem_callbacks.o 00:07:41.563 TEST_HEADER include/spdk/tree.h 00:07:41.563 TEST_HEADER include/spdk/ublk.h 00:07:41.563 TEST_HEADER include/spdk/util.h 00:07:41.563 TEST_HEADER include/spdk/uuid.h 00:07:41.563 TEST_HEADER include/spdk/version.h 00:07:41.563 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:41.563 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:41.563 TEST_HEADER include/spdk/vhost.h 00:07:41.563 TEST_HEADER include/spdk/vmd.h 00:07:41.563 TEST_HEADER include/spdk/xor.h 00:07:41.563 LINK rpc_client_test 00:07:41.563 TEST_HEADER include/spdk/zipf.h 00:07:41.563 CXX test/cpp_headers/accel.o 00:07:41.563 LINK zipf 00:07:41.563 LINK poller_perf 00:07:41.563 LINK nvmf_tgt 00:07:41.563 LINK spdk_trace_record 00:07:41.563 LINK bdev_svc 00:07:41.821 CXX test/cpp_headers/accel_module.o 00:07:41.821 LINK mem_callbacks 00:07:41.821 LINK spdk_trace 00:07:41.821 CC test/app/histogram_perf/histogram_perf.o 00:07:41.821 CXX test/cpp_headers/assert.o 00:07:41.821 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:41.821 CC examples/ioat/perf/perf.o 00:07:41.821 CC app/iscsi_tgt/iscsi_tgt.o 00:07:41.821 CC examples/ioat/verify/verify.o 00:07:42.080 CC test/env/vtophys/vtophys.o 00:07:42.080 LINK histogram_perf 00:07:42.080 LINK test_dma 00:07:42.080 CC test/event/event_perf/event_perf.o 00:07:42.080 CXX test/cpp_headers/barrier.o 00:07:42.080 LINK vtophys 00:07:42.080 CC app/spdk_tgt/spdk_tgt.o 00:07:42.080 CXX test/cpp_headers/base64.o 00:07:42.080 LINK iscsi_tgt 00:07:42.338 LINK verify 00:07:42.338 LINK ioat_perf 00:07:42.338 LINK event_perf 00:07:42.338 CC app/spdk_lspci/spdk_lspci.o 00:07:42.338 CC app/spdk_nvme_perf/perf.o 00:07:42.338 CXX test/cpp_headers/bdev.o 00:07:42.338 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:42.338 LINK spdk_tgt 00:07:42.595 LINK nvme_fuzz 00:07:42.595 CC test/app/jsoncat/jsoncat.o 00:07:42.595 CC test/app/stub/stub.o 00:07:42.595 CC test/event/reactor/reactor.o 00:07:42.595 LINK spdk_lspci 00:07:42.595 CC examples/vmd/lsvmd/lsvmd.o 00:07:42.595 LINK env_dpdk_post_init 00:07:42.595 CXX test/cpp_headers/bdev_module.o 00:07:42.595 CXX test/cpp_headers/bdev_zone.o 00:07:42.595 LINK jsoncat 00:07:42.595 LINK reactor 00:07:42.857 LINK lsvmd 00:07:42.857 LINK stub 00:07:42.857 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:42.857 CC test/env/memory/memory_ut.o 00:07:42.857 CXX test/cpp_headers/bit_array.o 00:07:42.857 CXX test/cpp_headers/bit_pool.o 00:07:42.857 CXX test/cpp_headers/blob_bdev.o 00:07:42.857 CC test/event/reactor_perf/reactor_perf.o 00:07:42.857 CC test/accel/dif/dif.o 00:07:43.115 CC examples/vmd/led/led.o 00:07:43.115 CC 
test/blobfs/mkfs/mkfs.o 00:07:43.115 LINK reactor_perf 00:07:43.115 CXX test/cpp_headers/blobfs_bdev.o 00:07:43.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:43.115 LINK led 00:07:43.115 CC examples/idxd/perf/perf.o 00:07:43.373 LINK mkfs 00:07:43.373 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:43.373 CC test/event/app_repeat/app_repeat.o 00:07:43.373 CXX test/cpp_headers/blobfs.o 00:07:43.373 CXX test/cpp_headers/blob.o 00:07:43.373 LINK spdk_nvme_perf 00:07:43.373 CXX test/cpp_headers/conf.o 00:07:43.631 LINK app_repeat 00:07:43.631 CXX test/cpp_headers/config.o 00:07:43.631 LINK idxd_perf 00:07:43.631 CC test/env/pci/pci_ut.o 00:07:43.631 CXX test/cpp_headers/cpuset.o 00:07:43.631 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:43.631 CC app/spdk_nvme_identify/identify.o 00:07:43.889 LINK memory_ut 00:07:43.889 CC test/event/scheduler/scheduler.o 00:07:43.889 CXX test/cpp_headers/crc16.o 00:07:43.889 LINK vhost_fuzz 00:07:43.889 LINK dif 00:07:43.889 LINK interrupt_tgt 00:07:44.148 CC test/lvol/esnap/esnap.o 00:07:44.148 CXX test/cpp_headers/crc32.o 00:07:44.148 LINK scheduler 00:07:44.148 CXX test/cpp_headers/crc64.o 00:07:44.148 LINK pci_ut 00:07:44.148 CC app/spdk_nvme_discover/discovery_aer.o 00:07:44.406 CC test/nvme/aer/aer.o 00:07:44.406 CXX test/cpp_headers/dif.o 00:07:44.406 CC examples/thread/thread/thread_ex.o 00:07:44.406 CC test/nvme/reset/reset.o 00:07:44.406 CC test/nvme/sgl/sgl.o 00:07:44.406 LINK spdk_nvme_discover 00:07:44.406 CXX test/cpp_headers/dma.o 00:07:44.663 CC app/spdk_top/spdk_top.o 00:07:44.663 LINK thread 00:07:44.663 LINK aer 00:07:44.663 CXX test/cpp_headers/endian.o 00:07:44.663 LINK reset 00:07:44.663 LINK spdk_nvme_identify 00:07:44.921 CC app/vhost/vhost.o 00:07:44.921 LINK sgl 00:07:44.921 CXX test/cpp_headers/env_dpdk.o 00:07:44.921 CC test/nvme/e2edp/nvme_dp.o 00:07:44.921 LINK iscsi_fuzz 00:07:44.921 LINK vhost 00:07:44.921 CC test/nvme/overhead/overhead.o 00:07:44.921 CC test/nvme/err_injection/err_injection.o 00:07:45.179 CC examples/sock/hello_world/hello_sock.o 00:07:45.179 CXX test/cpp_headers/env.o 00:07:45.179 CC test/bdev/bdevio/bdevio.o 00:07:45.179 LINK err_injection 00:07:45.179 CXX test/cpp_headers/event.o 00:07:45.437 LINK nvme_dp 00:07:45.437 CC app/spdk_dd/spdk_dd.o 00:07:45.437 LINK overhead 00:07:45.437 LINK hello_sock 00:07:45.437 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:45.437 CXX test/cpp_headers/fd_group.o 00:07:45.695 CC app/fio/nvme/fio_plugin.o 00:07:45.695 CC test/nvme/startup/startup.o 00:07:45.695 CC examples/accel/perf/accel_perf.o 00:07:45.695 LINK bdevio 00:07:45.695 CXX test/cpp_headers/fd.o 00:07:45.695 CC app/fio/bdev/fio_plugin.o 00:07:45.695 LINK spdk_top 00:07:45.953 LINK hello_fsdev 00:07:45.953 LINK spdk_dd 00:07:45.953 CXX test/cpp_headers/file.o 00:07:45.953 LINK startup 00:07:45.953 CXX test/cpp_headers/fsdev.o 00:07:45.953 CC test/nvme/reserve/reserve.o 00:07:45.953 CXX test/cpp_headers/fsdev_module.o 00:07:45.953 CXX test/cpp_headers/ftl.o 00:07:46.211 CXX test/cpp_headers/fuse_dispatcher.o 00:07:46.211 CC test/nvme/simple_copy/simple_copy.o 00:07:46.211 CC test/nvme/connect_stress/connect_stress.o 00:07:46.211 LINK reserve 00:07:46.211 CXX test/cpp_headers/gpt_spec.o 00:07:46.211 LINK accel_perf 00:07:46.468 LINK spdk_bdev 00:07:46.468 LINK spdk_nvme 00:07:46.468 LINK connect_stress 00:07:46.468 LINK simple_copy 00:07:46.468 CC examples/nvme/hello_world/hello_world.o 00:07:46.468 CXX test/cpp_headers/hexlify.o 00:07:46.468 CC examples/blob/hello_world/hello_blob.o 00:07:46.725 CC 
test/nvme/boot_partition/boot_partition.o 00:07:46.725 CC test/nvme/compliance/nvme_compliance.o 00:07:46.725 CC examples/blob/cli/blobcli.o 00:07:46.725 CC test/nvme/fused_ordering/fused_ordering.o 00:07:46.725 CXX test/cpp_headers/histogram_data.o 00:07:46.725 CC examples/bdev/hello_world/hello_bdev.o 00:07:46.725 LINK hello_world 00:07:46.725 LINK hello_blob 00:07:46.725 CC examples/bdev/bdevperf/bdevperf.o 00:07:46.725 LINK boot_partition 00:07:46.982 CXX test/cpp_headers/idxd.o 00:07:46.982 LINK fused_ordering 00:07:46.982 LINK hello_bdev 00:07:46.982 CC examples/nvme/reconnect/reconnect.o 00:07:46.982 LINK nvme_compliance 00:07:46.982 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:46.982 CC examples/nvme/arbitration/arbitration.o 00:07:46.983 CXX test/cpp_headers/idxd_spec.o 00:07:47.240 CC examples/nvme/hotplug/hotplug.o 00:07:47.240 LINK blobcli 00:07:47.240 CXX test/cpp_headers/init.o 00:07:47.240 CC test/nvme/fdp/fdp.o 00:07:47.240 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:47.498 CXX test/cpp_headers/ioat.o 00:07:47.498 LINK reconnect 00:07:47.498 LINK hotplug 00:07:47.498 LINK arbitration 00:07:47.498 LINK doorbell_aers 00:07:47.498 CC test/nvme/cuse/cuse.o 00:07:47.756 CXX test/cpp_headers/ioat_spec.o 00:07:47.756 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:47.756 CC examples/nvme/abort/abort.o 00:07:47.756 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:47.756 LINK fdp 00:07:47.756 LINK nvme_manage 00:07:47.756 CXX test/cpp_headers/iscsi_spec.o 00:07:47.756 CXX test/cpp_headers/json.o 00:07:47.756 LINK bdevperf 00:07:48.012 LINK cmb_copy 00:07:48.012 CXX test/cpp_headers/jsonrpc.o 00:07:48.012 CXX test/cpp_headers/keyring.o 00:07:48.012 CXX test/cpp_headers/keyring_module.o 00:07:48.013 LINK pmr_persistence 00:07:48.013 CXX test/cpp_headers/likely.o 00:07:48.013 CXX test/cpp_headers/log.o 00:07:48.013 CXX test/cpp_headers/lvol.o 00:07:48.270 CXX test/cpp_headers/md5.o 00:07:48.270 CXX test/cpp_headers/memory.o 00:07:48.270 CXX test/cpp_headers/mmio.o 00:07:48.270 CXX test/cpp_headers/nbd.o 00:07:48.270 CXX test/cpp_headers/net.o 00:07:48.270 CXX test/cpp_headers/notify.o 00:07:48.270 LINK abort 00:07:48.270 CXX test/cpp_headers/nvme.o 00:07:48.270 CXX test/cpp_headers/nvme_intel.o 00:07:48.270 CXX test/cpp_headers/nvme_ocssd.o 00:07:48.270 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:48.270 CXX test/cpp_headers/nvme_spec.o 00:07:48.527 CXX test/cpp_headers/nvme_zns.o 00:07:48.527 CXX test/cpp_headers/nvmf_cmd.o 00:07:48.527 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:48.527 CXX test/cpp_headers/nvmf.o 00:07:48.527 CXX test/cpp_headers/nvmf_spec.o 00:07:48.527 CXX test/cpp_headers/nvmf_transport.o 00:07:48.527 CXX test/cpp_headers/opal.o 00:07:48.527 CC examples/nvmf/nvmf/nvmf.o 00:07:48.527 CXX test/cpp_headers/opal_spec.o 00:07:48.527 CXX test/cpp_headers/pci_ids.o 00:07:48.785 CXX test/cpp_headers/pipe.o 00:07:48.785 CXX test/cpp_headers/queue.o 00:07:48.785 CXX test/cpp_headers/reduce.o 00:07:48.785 CXX test/cpp_headers/rpc.o 00:07:48.785 CXX test/cpp_headers/scheduler.o 00:07:48.785 CXX test/cpp_headers/scsi.o 00:07:48.785 CXX test/cpp_headers/scsi_spec.o 00:07:48.785 CXX test/cpp_headers/sock.o 00:07:48.785 CXX test/cpp_headers/stdinc.o 00:07:48.785 CXX test/cpp_headers/string.o 00:07:49.043 CXX test/cpp_headers/thread.o 00:07:49.043 LINK nvmf 00:07:49.043 CXX test/cpp_headers/trace.o 00:07:49.043 CXX test/cpp_headers/trace_parser.o 00:07:49.043 CXX test/cpp_headers/tree.o 00:07:49.043 CXX test/cpp_headers/ublk.o 00:07:49.043 CXX test/cpp_headers/util.o 
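The long run of CXX test/cpp_headers/*.o compiles above is a header self-containedness check: every public SPDK header is built as its own translation unit, so a header that forgets one of its own #includes fails immediately instead of working only when a sibling header happens to be included first. A rough standalone equivalent (the paths and compiler flags are assumptions, not the harness's actual rules):

    for hdr in include/spdk/*.h; do
        src=$(mktemp --suffix=.cpp)
        # compile the header alone, with nothing included before it
        printf '#include "%s"\nint main(void) { return 0; }\n' "$hdr" > "$src"
        g++ -I include -c "$src" -o /dev/null || echo "not self-contained: $hdr"
        rm -f "$src"
    done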
00:07:49.043 CXX test/cpp_headers/uuid.o 00:07:49.043 CXX test/cpp_headers/version.o 00:07:49.043 CXX test/cpp_headers/vfio_user_pci.o 00:07:49.043 CXX test/cpp_headers/vfio_user_spec.o 00:07:49.301 CXX test/cpp_headers/vhost.o 00:07:49.301 CXX test/cpp_headers/vmd.o 00:07:49.301 CXX test/cpp_headers/xor.o 00:07:49.301 CXX test/cpp_headers/zipf.o 00:07:49.301 LINK cuse 00:07:51.832 LINK esnap 00:07:51.832 00:07:51.832 real 1m37.065s 00:07:51.832 user 7m43.261s 00:07:51.832 sys 1m24.565s 00:07:51.832 15:34:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:51.832 15:34:40 make -- common/autotest_common.sh@10 -- $ set +x 00:07:51.832 ************************************ 00:07:51.832 END TEST make 00:07:51.832 ************************************ 00:07:51.832 15:34:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:51.832 15:34:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:51.832 15:34:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:51.832 15:34:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.832 15:34:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:52.090 15:34:40 -- pm/common@44 -- $ pid=6082 00:07:52.090 15:34:40 -- pm/common@50 -- $ kill -TERM 6082 00:07:52.090 15:34:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.090 15:34:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:52.090 15:34:40 -- pm/common@44 -- $ pid=6084 00:07:52.090 15:34:40 -- pm/common@50 -- $ kill -TERM 6084 00:07:52.090 15:34:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:52.090 15:34:40 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:52.090 15:34:40 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.090 15:34:40 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.090 15:34:40 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.090 15:34:40 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.090 15:34:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.090 15:34:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.090 15:34:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.090 15:34:40 -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.090 15:34:40 -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.090 15:34:40 -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.090 15:34:40 -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.090 15:34:40 -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.090 15:34:40 -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.090 15:34:40 -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.090 15:34:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.090 15:34:40 -- scripts/common.sh@344 -- # case "$op" in 00:07:52.090 15:34:40 -- scripts/common.sh@345 -- # : 1 00:07:52.090 15:34:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.090 15:34:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.090 15:34:40 -- scripts/common.sh@365 -- # decimal 1 00:07:52.090 15:34:40 -- scripts/common.sh@353 -- # local d=1 00:07:52.090 15:34:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.091 15:34:40 -- scripts/common.sh@355 -- # echo 1 00:07:52.091 15:34:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.091 15:34:40 -- scripts/common.sh@366 -- # decimal 2 00:07:52.091 15:34:40 -- scripts/common.sh@353 -- # local d=2 00:07:52.091 15:34:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.091 15:34:40 -- scripts/common.sh@355 -- # echo 2 00:07:52.091 15:34:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.091 15:34:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.091 15:34:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.091 15:34:40 -- scripts/common.sh@368 -- # return 0 00:07:52.091 15:34:40 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.091 15:34:40 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.091 --rc genhtml_branch_coverage=1 00:07:52.091 --rc genhtml_function_coverage=1 00:07:52.091 --rc genhtml_legend=1 00:07:52.091 --rc geninfo_all_blocks=1 00:07:52.091 --rc geninfo_unexecuted_blocks=1 00:07:52.091 00:07:52.091 ' 00:07:52.091 15:34:40 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.091 --rc genhtml_branch_coverage=1 00:07:52.091 --rc genhtml_function_coverage=1 00:07:52.091 --rc genhtml_legend=1 00:07:52.091 --rc geninfo_all_blocks=1 00:07:52.091 --rc geninfo_unexecuted_blocks=1 00:07:52.091 00:07:52.091 ' 00:07:52.091 15:34:40 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.091 --rc genhtml_branch_coverage=1 00:07:52.091 --rc genhtml_function_coverage=1 00:07:52.091 --rc genhtml_legend=1 00:07:52.091 --rc geninfo_all_blocks=1 00:07:52.091 --rc geninfo_unexecuted_blocks=1 00:07:52.091 00:07:52.091 ' 00:07:52.091 15:34:40 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.091 --rc genhtml_branch_coverage=1 00:07:52.091 --rc genhtml_function_coverage=1 00:07:52.091 --rc genhtml_legend=1 00:07:52.091 --rc geninfo_all_blocks=1 00:07:52.091 --rc geninfo_unexecuted_blocks=1 00:07:52.091 00:07:52.091 ' 00:07:52.091 15:34:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.091 15:34:40 -- nvmf/common.sh@7 -- # uname -s 00:07:52.091 15:34:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.091 15:34:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.091 15:34:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.091 15:34:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.091 15:34:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.091 15:34:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.091 15:34:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.091 15:34:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.091 15:34:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.091 15:34:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.091 15:34:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8803dd04-8b7b-4aef-9a54-2657a611621c 00:07:52.091 
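The scripts/common.sh trace above (lt 1.15 2 driving cmp_versions) is a field-by-field numeric comparison of dot/dash-separated version strings, used here to decide which options the installed lcov understands. A minimal sketch of the same idea; version_lt is an illustrative name rather than the actual SPDK helper, and it assumes purely numeric fields:

    version_lt() {    # succeeds when $1 sorts strictly before $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"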
15:34:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8803dd04-8b7b-4aef-9a54-2657a611621c 00:07:52.091 15:34:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.091 15:34:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.091 15:34:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:52.091 15:34:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.091 15:34:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.091 15:34:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.091 15:34:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.091 15:34:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.091 15:34:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.091 15:34:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.091 15:34:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.091 15:34:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.091 15:34:40 -- paths/export.sh@5 -- # export PATH 00:07:52.091 15:34:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.091 15:34:40 -- nvmf/common.sh@51 -- # : 0 00:07:52.091 15:34:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.091 15:34:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.091 15:34:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.091 15:34:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.091 15:34:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.091 15:34:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.091 15:34:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.091 15:34:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.091 15:34:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.091 15:34:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:52.091 15:34:40 -- spdk/autotest.sh@32 -- # uname -s 00:07:52.091 15:34:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:52.091 15:34:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:52.091 15:34:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:52.091 15:34:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:52.091 15:34:40 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:52.091 15:34:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:52.350 15:34:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:52.350 15:34:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:52.350 15:34:40 -- spdk/autotest.sh@48 -- # udevadm_pid=67238 00:07:52.350 15:34:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:52.350 15:34:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:52.350 15:34:40 -- pm/common@17 -- # local monitor 00:07:52.350 15:34:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.350 15:34:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:52.350 15:34:40 -- pm/common@25 -- # sleep 1 00:07:52.350 15:34:40 -- pm/common@21 -- # date +%s 00:07:52.350 15:34:40 -- pm/common@21 -- # date +%s 00:07:52.350 15:34:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499280 00:07:52.350 15:34:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733499280 00:07:52.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499280_collect-vmstat.pm.log 00:07:52.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733499280_collect-cpu-load.pm.log 00:07:53.343 15:34:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:53.343 15:34:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:53.343 15:34:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.343 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:53.343 15:34:41 -- spdk/autotest.sh@59 -- # create_test_list 00:07:53.343 15:34:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:53.343 15:34:41 -- common/autotest_common.sh@10 -- # set +x 00:07:53.343 15:34:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:53.343 15:34:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:53.343 15:34:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:53.343 15:34:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:53.343 15:34:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:53.343 15:34:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:53.343 15:34:41 -- common/autotest_common.sh@1457 -- # uname 00:07:53.343 15:34:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:53.343 15:34:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:53.343 15:34:41 -- common/autotest_common.sh@1477 -- # uname 00:07:53.343 15:34:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:53.343 15:34:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:53.343 15:34:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:53.343 lcov: LCOV version 1.15 00:07:53.343 15:34:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:11.449 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:11.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:29.541 15:35:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:29.541 15:35:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.541 15:35:16 -- common/autotest_common.sh@10 -- # set +x 00:08:29.541 15:35:16 -- spdk/autotest.sh@78 -- # rm -f 00:08:29.541 15:35:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:29.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.541 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:29.541 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:29.541 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:29.541 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:29.541 15:35:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:29.541 15:35:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:29.541 15:35:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:29.541 15:35:17 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:29.541 15:35:17 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:29.541 15:35:17 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:29.541 15:35:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ 
-e /sys/block/nvme1n1/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:29.541 15:35:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:29.541 15:35:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:29.541 15:35:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:29.541 15:35:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:29.541 15:35:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:29.541 15:35:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.541 15:35:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.541 15:35:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:29.541 15:35:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:29.541 15:35:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:29.541 No valid GPT data, bailing 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # pt= 00:08:29.541 15:35:17 -- scripts/common.sh@395 -- # return 1 00:08:29.541 15:35:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:29.541 1+0 records in 00:08:29.541 1+0 records out 00:08:29.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495187 s, 212 MB/s 00:08:29.541 15:35:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.541 15:35:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.541 15:35:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:08:29.541 15:35:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:08:29.541 15:35:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:08:29.541 No valid GPT data, bailing 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # pt= 00:08:29.541 15:35:17 -- scripts/common.sh@395 -- # return 1 00:08:29.541 15:35:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:08:29.541 1+0 records in 00:08:29.541 1+0 records out 00:08:29.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500291 s, 210 MB/s 00:08:29.541 15:35:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.541 15:35:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.541 15:35:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:08:29.541 15:35:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:08:29.541 15:35:17 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:08:29.541 No valid GPT data, bailing 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:29.541 15:35:17 -- scripts/common.sh@394 -- # pt= 00:08:29.541 15:35:17 -- scripts/common.sh@395 -- # return 1 00:08:29.541 15:35:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:08:29.541 1+0 records in 00:08:29.541 1+0 records out 00:08:29.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0035482 s, 296 MB/s 00:08:29.542 15:35:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.542 15:35:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.542 15:35:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:29.542 15:35:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:29.542 15:35:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:29.542 No valid GPT data, bailing 00:08:29.542 15:35:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:29.542 15:35:18 -- scripts/common.sh@394 -- # pt= 00:08:29.542 15:35:18 -- scripts/common.sh@395 -- # return 1 00:08:29.542 15:35:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:29.542 1+0 records in 00:08:29.542 1+0 records out 00:08:29.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145344 s, 72.1 MB/s 00:08:29.542 15:35:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.542 15:35:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.542 15:35:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:29.542 15:35:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:29.542 15:35:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:29.542 No valid GPT data, bailing 00:08:29.542 15:35:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:29.542 15:35:18 -- scripts/common.sh@394 -- # pt= 00:08:29.542 15:35:18 -- scripts/common.sh@395 -- # return 1 00:08:29.542 15:35:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:29.542 1+0 records in 00:08:29.542 1+0 records out 00:08:29.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523499 s, 200 MB/s 00:08:29.542 15:35:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:29.542 15:35:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:29.542 15:35:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:29.542 15:35:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:29.542 15:35:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:29.542 No valid GPT data, bailing 00:08:29.542 15:35:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:29.542 15:35:18 -- scripts/common.sh@394 -- # pt= 00:08:29.542 15:35:18 -- scripts/common.sh@395 -- # return 1 00:08:29.542 15:35:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:29.800 1+0 records in 00:08:29.800 1+0 records out 00:08:29.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473034 s, 222 MB/s 00:08:29.800 15:35:18 -- spdk/autotest.sh@105 -- # sync 00:08:29.800 15:35:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:29.800 15:35:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:29.800 15:35:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:32.327 
15:35:20 -- spdk/autotest.sh@111 -- # uname -s 00:08:32.327 15:35:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:32.327 15:35:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:32.327 15:35:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:32.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:33.152 Hugepages 00:08:33.152 node hugesize free / total 00:08:33.152 node0 1048576kB 0 / 0 00:08:33.152 node0 2048kB 0 / 0 00:08:33.152 00:08:33.152 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:33.152 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:33.152 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:33.410 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:33.410 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:08:33.410 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:08:33.410 15:35:22 -- spdk/autotest.sh@117 -- # uname -s 00:08:33.410 15:35:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:33.410 15:35:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:33.410 15:35:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:33.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:34.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.800 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.800 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.800 15:35:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:35.736 15:35:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:35.736 15:35:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:35.736 15:35:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:35.736 15:35:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:35.736 15:35:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:35.736 15:35:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:35.736 15:35:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:35.736 15:35:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:35.736 15:35:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:35.995 15:35:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:35.995 15:35:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:35.995 15:35:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.512 Waiting for block devices as requested 00:08:36.512 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.512 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.770 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.770 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.033 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:42.033 15:35:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
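The get_nvme_ctrlr_from_bdf trace that begins above resolves a PCI address to its controller node by walking /sys/class/nvme and matching the BDF inside each resolved symlink target, the same readlink/grep/basename chain shown in the log. A condensed sketch of that lookup (the hard-coded BDFs are examples taken from this run):

    for bdf in 0000:00:10.0 0000:00:11.0; do
        for link in /sys/class/nvme/nvme*; do
            # the resolved sysfs path embeds the owning PCI address
            if readlink -f "$link" | grep -qF "$bdf/nvme/nvme"; then
                echo "$bdf -> /dev/$(basename "$link")"
            fi
        done
    done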
00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:42.033 15:35:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1543 -- # continue 00:08:42.033 15:35:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:42.033 15:35:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1543 -- # continue 00:08:42.033 15:35:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:42.033 15:35:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1543 -- # continue 00:08:42.033 15:35:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:08:42.033 15:35:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:42.033 15:35:30 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:42.033 15:35:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:42.033 15:35:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:42.033 15:35:30 -- common/autotest_common.sh@1543 -- # continue 00:08:42.033 15:35:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:42.033 15:35:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.033 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:42.033 15:35:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:42.033 15:35:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.033 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:42.033 15:35:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.237 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.237 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.237 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.237 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.495 15:35:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:43.495 15:35:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.495 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 15:35:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:43.495 15:35:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:43.495 15:35:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:43.495 15:35:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:43.495 15:35:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:43.495 15:35:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:43.495 15:35:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:43.495 15:35:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:43.495 15:35:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:43.495 15:35:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:43.495 15:35:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.495 15:35:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.495 15:35:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:43.495 15:35:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:43.495 15:35:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.495 15:35:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:43.495 15:35:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:43.495 15:35:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:43.495 
15:35:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:43.495 15:35:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:43.495 15:35:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:43.495 15:35:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:43.495 15:35:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:43.495 15:35:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:43.495 15:35:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:43.495 15:35:32 -- common/autotest_common.sh@1572 -- # return 0 00:08:43.495 15:35:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:43.495 15:35:32 -- common/autotest_common.sh@1580 -- # return 0 00:08:43.495 15:35:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:43.495 15:35:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:43.495 15:35:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:43.495 15:35:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:43.495 15:35:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:43.495 15:35:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.495 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 15:35:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:43.495 15:35:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:43.495 15:35:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.495 15:35:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.495 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 ************************************ 00:08:43.495 START TEST env 00:08:43.495 ************************************ 00:08:43.495 15:35:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:43.753 * Looking for test storage... 
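The get_nvme_bdfs_by_id 0x0a54 trace above decides whether opal_revert_cleanup has any work to do by reading each controller's PCI device ID out of sysfs; the QEMU controllers in this run all report 0x0010, so nothing matches and the revert is skipped. A sketch of that per-device check (the BDF value is an example from this run):

    bdf=0000:00:10.0
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, e.g. 0x0010
    if [[ $device == 0x0a54 ]]; then
        echo "$bdf matches 0x0a54; OPAL revert applies"
    else
        echo "$bdf reports $device; skipping"
    fi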
00:08:43.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.753 15:35:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.753 15:35:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.753 15:35:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.753 15:35:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.753 15:35:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.753 15:35:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.753 15:35:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.753 15:35:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.753 15:35:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.753 15:35:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.753 15:35:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.753 15:35:32 env -- scripts/common.sh@344 -- # case "$op" in 00:08:43.753 15:35:32 env -- scripts/common.sh@345 -- # : 1 00:08:43.753 15:35:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.753 15:35:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.753 15:35:32 env -- scripts/common.sh@365 -- # decimal 1 00:08:43.753 15:35:32 env -- scripts/common.sh@353 -- # local d=1 00:08:43.753 15:35:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.753 15:35:32 env -- scripts/common.sh@355 -- # echo 1 00:08:43.753 15:35:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.753 15:35:32 env -- scripts/common.sh@366 -- # decimal 2 00:08:43.753 15:35:32 env -- scripts/common.sh@353 -- # local d=2 00:08:43.753 15:35:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.753 15:35:32 env -- scripts/common.sh@355 -- # echo 2 00:08:43.753 15:35:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.753 15:35:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.753 15:35:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.753 15:35:32 env -- scripts/common.sh@368 -- # return 0 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.753 --rc genhtml_branch_coverage=1 00:08:43.753 --rc genhtml_function_coverage=1 00:08:43.753 --rc genhtml_legend=1 00:08:43.753 --rc geninfo_all_blocks=1 00:08:43.753 --rc geninfo_unexecuted_blocks=1 00:08:43.753 00:08:43.753 ' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.753 --rc genhtml_branch_coverage=1 00:08:43.753 --rc genhtml_function_coverage=1 00:08:43.753 --rc genhtml_legend=1 00:08:43.753 --rc geninfo_all_blocks=1 00:08:43.753 --rc geninfo_unexecuted_blocks=1 00:08:43.753 00:08:43.753 ' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.753 --rc genhtml_branch_coverage=1 00:08:43.753 --rc genhtml_function_coverage=1 00:08:43.753 --rc 
genhtml_legend=1 00:08:43.753 --rc geninfo_all_blocks=1 00:08:43.753 --rc geninfo_unexecuted_blocks=1 00:08:43.753 00:08:43.753 ' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.753 --rc genhtml_branch_coverage=1 00:08:43.753 --rc genhtml_function_coverage=1 00:08:43.753 --rc genhtml_legend=1 00:08:43.753 --rc geninfo_all_blocks=1 00:08:43.753 --rc geninfo_unexecuted_blocks=1 00:08:43.753 00:08:43.753 ' 00:08:43.753 15:35:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.753 15:35:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.753 15:35:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:43.753 ************************************ 00:08:43.753 START TEST env_memory 00:08:43.753 ************************************ 00:08:43.753 15:35:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:43.753 00:08:43.753 00:08:43.754 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.754 http://cunit.sourceforge.net/ 00:08:43.754 00:08:43.754 00:08:43.754 Suite: memory 00:08:43.754 Test: alloc and free memory map ...[2024-12-06 15:35:32.413658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:44.012 passed 00:08:44.012 Test: mem map translation ...[2024-12-06 15:35:32.474272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:44.012 [2024-12-06 15:35:32.474357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:44.012 [2024-12-06 15:35:32.474459] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:44.012 [2024-12-06 15:35:32.474489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:44.012 passed 00:08:44.012 Test: mem map registration ...[2024-12-06 15:35:32.573124] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:44.012 [2024-12-06 15:35:32.573212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:44.012 passed 00:08:44.270 Test: mem map adjacent registrations ...passed 00:08:44.270 00:08:44.270 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.270 suites 1 1 n/a 0 0 00:08:44.270 tests 4 4 4 0 0 00:08:44.270 asserts 152 152 152 0 n/a 00:08:44.270 00:08:44.270 Elapsed time = 0.369 seconds 00:08:44.270 00:08:44.270 real 0m0.411s 00:08:44.270 user 0m0.373s 00:08:44.270 sys 0m0.029s 00:08:44.270 15:35:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.270 15:35:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:44.270 ************************************ 00:08:44.270 END TEST env_memory 00:08:44.270 ************************************ 00:08:44.270 15:35:32 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:44.270 15:35:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.270 15:35:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.270 15:35:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:44.270 ************************************ 00:08:44.270 START TEST env_vtophys 00:08:44.270 ************************************ 00:08:44.270 15:35:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:44.270 EAL: lib.eal log level changed from notice to debug 00:08:44.270 EAL: Detected lcore 0 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 1 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 2 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 3 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 4 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 5 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 6 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 7 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 8 as core 0 on socket 0 00:08:44.270 EAL: Detected lcore 9 as core 0 on socket 0 00:08:44.270 EAL: Maximum logical cores by configuration: 128 00:08:44.270 EAL: Detected CPU lcores: 10 00:08:44.270 EAL: Detected NUMA nodes: 1 00:08:44.270 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:44.270 EAL: Detected shared linkage of DPDK 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:44.270 EAL: Registered [vdev] bus. 00:08:44.270 EAL: bus.vdev log level changed from disabled to notice 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:44.270 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:44.270 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:44.270 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:44.270 EAL: No shared files mode enabled, IPC will be disabled 00:08:44.270 EAL: No shared files mode enabled, IPC is disabled 00:08:44.270 EAL: Selected IOVA mode 'PA' 00:08:44.270 EAL: Probing VFIO support... 00:08:44.270 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:44.270 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:44.270 EAL: Ask a virtual area of 0x2e000 bytes 00:08:44.270 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:44.270 EAL: Setting up physically contiguous memory... 
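# EAL lands in IOVA mode 'PA' above because /sys/module/vfio is absent in this VM,
# so DPDK skips VFIO and falls back to physical addressing. A minimal pre-flight
# check for the same conditions on a similar host (standard sysfs/procfs paths,
# nothing specific to this run):
lsmod | grep -q '^vfio' && echo vfio loaded || echo vfio missing
grep -i huge /proc/meminfo          # hugepage pool backing the 2 MiB segments below
# with a working IOMMU, "sudo modprobe vfio-pci" would let EAL select VFIO instead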
00:08:44.270 EAL: Setting maximum number of open files to 524288 00:08:44.270 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:44.270 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:44.270 EAL: Ask a virtual area of 0x61000 bytes 00:08:44.270 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:44.270 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:44.270 EAL: Ask a virtual area of 0x400000000 bytes 00:08:44.270 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:44.270 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:44.270 EAL: Ask a virtual area of 0x61000 bytes 00:08:44.270 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:44.270 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:44.270 EAL: Ask a virtual area of 0x400000000 bytes 00:08:44.270 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:44.270 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:44.270 EAL: Ask a virtual area of 0x61000 bytes 00:08:44.270 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:44.270 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:44.270 EAL: Ask a virtual area of 0x400000000 bytes 00:08:44.270 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:44.270 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:44.270 EAL: Ask a virtual area of 0x61000 bytes 00:08:44.270 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:44.270 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:44.270 EAL: Ask a virtual area of 0x400000000 bytes 00:08:44.270 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:44.270 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:44.270 EAL: Hugepages will be freed exactly as allocated. 00:08:44.270 EAL: No shared files mode enabled, IPC is disabled 00:08:44.270 EAL: No shared files mode enabled, IPC is disabled 00:08:44.529 EAL: TSC frequency is ~2200000 KHz 00:08:44.529 EAL: Main lcore 0 is ready (tid=7f71bd62da40;cpuset=[0]) 00:08:44.529 EAL: Trying to obtain current memory policy. 00:08:44.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:44.529 EAL: Restoring previous memory policy: 0 00:08:44.529 EAL: request: mp_malloc_sync 00:08:44.529 EAL: No shared files mode enabled, IPC is disabled 00:08:44.529 EAL: Heap on socket 0 was expanded by 2MB 00:08:44.529 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:44.529 EAL: No shared files mode enabled, IPC is disabled 00:08:44.529 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:44.529 EAL: Mem event callback 'spdk:(nil)' registered 00:08:44.529 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:44.529 00:08:44.529 00:08:44.529 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.529 http://cunit.sourceforge.net/ 00:08:44.529 00:08:44.529 00:08:44.529 Suite: components_suite 00:08:45.096 Test: vtophys_malloc_test ...passed 00:08:45.096 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
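# Each of the 4 memseg lists above reserves 0x400000000 bytes of VA: 8192 segs
# x 2 MiB hugepages = 16 GiB per list, 64 GiB total. That is address space only,
# not allocation; pages are faulted in as the heap grows in the rounds below.
# Checking the arithmetic is plain bash:
echo $(( 8192 * 2 * 1024 * 1024 )) $(( 0x400000000 ))   # both print 17179869184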
00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 4MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 4MB 00:08:45.096 EAL: Trying to obtain current memory policy. 00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 6MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 6MB 00:08:45.096 EAL: Trying to obtain current memory policy. 00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 10MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 10MB 00:08:45.096 EAL: Trying to obtain current memory policy. 00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 18MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 18MB 00:08:45.096 EAL: Trying to obtain current memory policy. 00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 34MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 34MB 00:08:45.096 EAL: Trying to obtain current memory policy. 
00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was expanded by 66MB 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.096 EAL: No shared files mode enabled, IPC is disabled 00:08:45.096 EAL: Heap on socket 0 was shrunk by 66MB 00:08:45.096 EAL: Trying to obtain current memory policy. 00:08:45.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.096 EAL: Restoring previous memory policy: 4 00:08:45.096 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.096 EAL: request: mp_malloc_sync 00:08:45.097 EAL: No shared files mode enabled, IPC is disabled 00:08:45.097 EAL: Heap on socket 0 was expanded by 130MB 00:08:45.097 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.097 EAL: request: mp_malloc_sync 00:08:45.097 EAL: No shared files mode enabled, IPC is disabled 00:08:45.097 EAL: Heap on socket 0 was shrunk by 130MB 00:08:45.097 EAL: Trying to obtain current memory policy. 00:08:45.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.097 EAL: Restoring previous memory policy: 4 00:08:45.097 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.097 EAL: request: mp_malloc_sync 00:08:45.097 EAL: No shared files mode enabled, IPC is disabled 00:08:45.097 EAL: Heap on socket 0 was expanded by 258MB 00:08:45.355 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.355 EAL: request: mp_malloc_sync 00:08:45.355 EAL: No shared files mode enabled, IPC is disabled 00:08:45.355 EAL: Heap on socket 0 was shrunk by 258MB 00:08:45.355 EAL: Trying to obtain current memory policy. 00:08:45.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:45.614 EAL: Restoring previous memory policy: 4 00:08:45.614 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.614 EAL: request: mp_malloc_sync 00:08:45.614 EAL: No shared files mode enabled, IPC is disabled 00:08:45.614 EAL: Heap on socket 0 was expanded by 514MB 00:08:45.614 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.873 EAL: request: mp_malloc_sync 00:08:45.873 EAL: No shared files mode enabled, IPC is disabled 00:08:45.873 EAL: Heap on socket 0 was shrunk by 514MB 00:08:45.873 EAL: Trying to obtain current memory policy. 
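# The malloc rounds above step the heap through (2^n + 2) MB: 4, 6, 10, 18, 34,
# 66, 130, 258, 514 MB so far, with a final 1026 MB round below. Reproducing the
# sequence is plain arithmetic, nothing SPDK-specific:
for n in $(seq 1 10); do printf '%dMB\n' $(( (1 << n) + 2 )); done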
00:08:45.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:46.134 EAL: Restoring previous memory policy: 4 00:08:46.134 EAL: Calling mem event callback 'spdk:(nil)' 00:08:46.134 EAL: request: mp_malloc_sync 00:08:46.134 EAL: No shared files mode enabled, IPC is disabled 00:08:46.134 EAL: Heap on socket 0 was expanded by 1026MB 00:08:46.392 EAL: Calling mem event callback 'spdk:(nil)' 00:08:46.979 passed 00:08:46.979 00:08:46.979 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.979 suites 1 1 n/a 0 0 00:08:46.979 tests 2 2 2 0 0 00:08:46.979 asserts 5400 5400 5400 0 n/a 00:08:46.979 00:08:46.979 Elapsed time = 2.320 seconds 00:08:46.979 EAL: request: mp_malloc_sync 00:08:46.979 EAL: No shared files mode enabled, IPC is disabled 00:08:46.979 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:46.979 EAL: Calling mem event callback 'spdk:(nil)' 00:08:46.979 EAL: request: mp_malloc_sync 00:08:46.979 EAL: No shared files mode enabled, IPC is disabled 00:08:46.979 EAL: Heap on socket 0 was shrunk by 2MB 00:08:46.979 EAL: No shared files mode enabled, IPC is disabled 00:08:46.979 EAL: No shared files mode enabled, IPC is disabled 00:08:46.979 EAL: No shared files mode enabled, IPC is disabled 00:08:46.979 00:08:46.979 real 0m2.591s 00:08:46.979 user 0m1.335s 00:08:46.979 sys 0m1.111s 00:08:46.979 15:35:35 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.979 15:35:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:46.979 ************************************ 00:08:46.979 END TEST env_vtophys 00:08:46.979 ************************************ 00:08:46.979 15:35:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:46.979 15:35:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.979 15:35:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.979 15:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:08:46.979 ************************************ 00:08:46.979 START TEST env_pci 00:08:46.979 ************************************ 00:08:46.979 15:35:35 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:46.979 00:08:46.979 00:08:46.979 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.979 http://cunit.sourceforge.net/ 00:08:46.979 00:08:46.979 00:08:46.979 Suite: pci 00:08:46.979 Test: pci_hook ...[2024-12-06 15:35:35.487130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70086 has claimed it 00:08:46.979 passed 00:08:46.979 00:08:46.979 EAL: Cannot find device (10000:00:01.0) 00:08:46.979 EAL: Failed to attach device on primary process 00:08:46.979 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.979 suites 1 1 n/a 0 0 00:08:46.979 tests 1 1 1 0 0 00:08:46.979 asserts 25 25 25 0 n/a 00:08:46.979 00:08:46.979 Elapsed time = 0.008 seconds 00:08:46.979 00:08:46.979 real 0m0.075s 00:08:46.979 user 0m0.032s 00:08:46.979 sys 0m0.042s 00:08:46.979 15:35:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.979 15:35:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:46.979 ************************************ 00:08:46.979 END TEST env_pci 00:08:46.979 ************************************ 00:08:46.979 15:35:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:46.979 15:35:35 env -- env/env.sh@15 -- # uname 00:08:46.979 15:35:35 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:46.979 15:35:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:46.979 15:35:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:46.979 15:35:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.979 15:35:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.979 15:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:08:46.979 ************************************ 00:08:46.979 START TEST env_dpdk_post_init 00:08:46.979 ************************************ 00:08:46.979 15:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:46.979 EAL: Detected CPU lcores: 10 00:08:46.979 EAL: Detected NUMA nodes: 1 00:08:46.979 EAL: Detected shared linkage of DPDK 00:08:46.979 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:46.979 EAL: Selected IOVA mode 'PA' 00:08:47.237 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:47.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:47.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:47.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:47.237 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:47.237 Starting DPDK initialization... 00:08:47.237 Starting SPDK post initialization... 00:08:47.237 SPDK NVMe probe 00:08:47.237 Attaching to 0000:00:10.0 00:08:47.237 Attaching to 0000:00:11.0 00:08:47.237 Attaching to 0000:00:12.0 00:08:47.237 Attaching to 0000:00:13.0 00:08:47.237 Attached to 0000:00:10.0 00:08:47.237 Attached to 0000:00:11.0 00:08:47.237 Attached to 0000:00:13.0 00:08:47.237 Attached to 0000:00:12.0 00:08:47.237 Cleaning up... 
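# All four controllers probed above are 1b36:0010 — QEMU's emulated NVMe device,
# the same device ID (0x0010) the bdf scan at the top of this log reads out of
# sysfs. A minimal sketch of that enumeration (0x010802 is the standard PCI class
# code for NVMe; standard sysfs layout assumed):
for d in /sys/bus/pci/devices/*; do
  [ "$(cat "$d/class")" = "0x010802" ] || continue
  printf '%s %s:%s\n' "${d##*/}" "$(cat "$d/vendor")" "$(cat "$d/device")"
done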
00:08:47.237 00:08:47.237 real 0m0.264s 00:08:47.237 user 0m0.080s 00:08:47.237 sys 0m0.085s 00:08:47.237 ************************************ 00:08:47.237 END TEST env_dpdk_post_init 00:08:47.237 ************************************ 00:08:47.237 15:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.237 15:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:47.237 15:35:35 env -- env/env.sh@26 -- # uname 00:08:47.237 15:35:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:47.237 15:35:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:47.237 15:35:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.237 15:35:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.237 15:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:08:47.237 ************************************ 00:08:47.237 START TEST env_mem_callbacks 00:08:47.237 ************************************ 00:08:47.237 15:35:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:47.495 EAL: Detected CPU lcores: 10 00:08:47.495 EAL: Detected NUMA nodes: 1 00:08:47.495 EAL: Detected shared linkage of DPDK 00:08:47.495 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:47.495 EAL: Selected IOVA mode 'PA' 00:08:47.495 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:47.495 00:08:47.495 00:08:47.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.495 http://cunit.sourceforge.net/ 00:08:47.495 00:08:47.495 00:08:47.495 Suite: memory 00:08:47.495 Test: test ... 00:08:47.495 register 0x200000200000 2097152 00:08:47.495 malloc 3145728 00:08:47.495 register 0x200000400000 4194304 00:08:47.495 buf 0x200000500000 len 3145728 PASSED 00:08:47.495 malloc 64 00:08:47.495 buf 0x2000004fff40 len 64 PASSED 00:08:47.495 malloc 4194304 00:08:47.495 register 0x200000800000 6291456 00:08:47.495 buf 0x200000a00000 len 4194304 PASSED 00:08:47.495 free 0x200000500000 3145728 00:08:47.495 free 0x2000004fff40 64 00:08:47.495 unregister 0x200000400000 4194304 PASSED 00:08:47.495 free 0x200000a00000 4194304 00:08:47.495 unregister 0x200000800000 6291456 PASSED 00:08:47.495 malloc 8388608 00:08:47.495 register 0x200000400000 10485760 00:08:47.495 buf 0x200000600000 len 8388608 PASSED 00:08:47.495 free 0x200000600000 8388608 00:08:47.495 unregister 0x200000400000 10485760 PASSED 00:08:47.495 passed 00:08:47.495 00:08:47.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.495 suites 1 1 n/a 0 0 00:08:47.495 tests 1 1 1 0 0 00:08:47.495 asserts 15 15 15 0 n/a 00:08:47.495 00:08:47.495 Elapsed time = 0.010 seconds 00:08:47.495 00:08:47.495 real 0m0.172s 00:08:47.495 user 0m0.035s 00:08:47.495 sys 0m0.034s 00:08:47.495 15:35:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.495 15:35:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 ************************************ 00:08:47.495 END TEST env_mem_callbacks 00:08:47.495 ************************************ 00:08:47.495 00:08:47.495 real 0m4.020s 00:08:47.495 user 0m2.046s 00:08:47.495 sys 0m1.586s 00:08:47.495 15:35:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.495 15:35:36 env -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 ************************************ 00:08:47.495 END TEST env 00:08:47.495 
************************************ 00:08:47.495 15:35:36 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:47.495 15:35:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.495 15:35:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.496 15:35:36 -- common/autotest_common.sh@10 -- # set +x 00:08:47.754 ************************************ 00:08:47.754 START TEST rpc 00:08:47.754 ************************************ 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:47.754 * Looking for test storage... 00:08:47.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.754 15:35:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.754 15:35:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.754 15:35:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.754 15:35:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.754 15:35:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.754 15:35:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:47.754 15:35:36 rpc -- scripts/common.sh@345 -- # : 1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.754 15:35:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.754 15:35:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@353 -- # local d=1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.754 15:35:36 rpc -- scripts/common.sh@355 -- # echo 1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.754 15:35:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@353 -- # local d=2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.754 15:35:36 rpc -- scripts/common.sh@355 -- # echo 2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.754 15:35:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.754 15:35:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.754 15:35:36 rpc -- scripts/common.sh@368 -- # return 0 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.754 --rc genhtml_branch_coverage=1 00:08:47.754 --rc genhtml_function_coverage=1 00:08:47.754 --rc genhtml_legend=1 00:08:47.754 --rc geninfo_all_blocks=1 00:08:47.754 --rc geninfo_unexecuted_blocks=1 00:08:47.754 00:08:47.754 ' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.754 --rc genhtml_branch_coverage=1 00:08:47.754 --rc genhtml_function_coverage=1 00:08:47.754 --rc genhtml_legend=1 00:08:47.754 --rc geninfo_all_blocks=1 00:08:47.754 --rc geninfo_unexecuted_blocks=1 00:08:47.754 00:08:47.754 ' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.754 --rc genhtml_branch_coverage=1 00:08:47.754 --rc genhtml_function_coverage=1 00:08:47.754 --rc genhtml_legend=1 00:08:47.754 --rc geninfo_all_blocks=1 00:08:47.754 --rc geninfo_unexecuted_blocks=1 00:08:47.754 00:08:47.754 ' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.754 --rc genhtml_branch_coverage=1 00:08:47.754 --rc genhtml_function_coverage=1 00:08:47.754 --rc genhtml_legend=1 00:08:47.754 --rc geninfo_all_blocks=1 00:08:47.754 --rc geninfo_unexecuted_blocks=1 00:08:47.754 00:08:47.754 ' 00:08:47.754 15:35:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70213 00:08:47.754 15:35:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.754 15:35:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:47.754 15:35:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70213 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 70213 ']' 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
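# waitforlisten above blocks until the freshly launched spdk_tgt (started with
# "-e bdev" a few lines up) answers on /var/tmp/spdk.sock. Outside the harness,
# the same handshake can be reproduced with the stock rpc.py client
# (rpc_get_methods is a built-in RPC that any live target answers):
./build/bin/spdk_tgt -e bdev &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done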
00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.754 15:35:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.012 [2024-12-06 15:35:36.515730] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:08:48.012 [2024-12-06 15:35:36.515926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:08:48.012 [2024-12-06 15:35:36.682590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.271 [2024-12-06 15:35:36.757373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:48.271 [2024-12-06 15:35:36.757455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70213' to capture a snapshot of events at runtime. 00:08:48.271 [2024-12-06 15:35:36.757481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.271 [2024-12-06 15:35:36.757497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.271 [2024-12-06 15:35:36.757530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70213 for offline analysis/debug. 00:08:48.271 [2024-12-06 15:35:36.758251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.842 15:35:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.842 15:35:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:48.842 15:35:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:48.842 15:35:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:48.842 15:35:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:48.842 15:35:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:48.842 15:35:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.842 15:35:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.842 15:35:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.842 ************************************ 00:08:48.842 START TEST rpc_integrity 00:08:48.842 ************************************ 00:08:48.842 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:48.842 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:48.842 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.842 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.101 15:35:37 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:49.101 { 00:08:49.101 "name": "Malloc0", 00:08:49.101 "aliases": [ 00:08:49.101 "4f0dcd11-c748-4622-8e01-627b3cf7cb65" 00:08:49.101 ], 00:08:49.101 "product_name": "Malloc disk", 00:08:49.101 "block_size": 512, 00:08:49.101 "num_blocks": 16384, 00:08:49.101 "uuid": "4f0dcd11-c748-4622-8e01-627b3cf7cb65", 00:08:49.101 "assigned_rate_limits": { 00:08:49.101 "rw_ios_per_sec": 0, 00:08:49.101 "rw_mbytes_per_sec": 0, 00:08:49.101 "r_mbytes_per_sec": 0, 00:08:49.101 "w_mbytes_per_sec": 0 00:08:49.101 }, 00:08:49.101 "claimed": false, 00:08:49.101 "zoned": false, 00:08:49.101 "supported_io_types": { 00:08:49.101 "read": true, 00:08:49.101 "write": true, 00:08:49.101 "unmap": true, 00:08:49.101 "flush": true, 00:08:49.101 "reset": true, 00:08:49.101 "nvme_admin": false, 00:08:49.101 "nvme_io": false, 00:08:49.101 "nvme_io_md": false, 00:08:49.101 "write_zeroes": true, 00:08:49.101 "zcopy": true, 00:08:49.101 "get_zone_info": false, 00:08:49.101 "zone_management": false, 00:08:49.101 "zone_append": false, 00:08:49.101 "compare": false, 00:08:49.101 "compare_and_write": false, 00:08:49.101 "abort": true, 00:08:49.101 "seek_hole": false, 00:08:49.101 "seek_data": false, 00:08:49.101 "copy": true, 00:08:49.101 "nvme_iov_md": false 00:08:49.101 }, 00:08:49.101 "memory_domains": [ 00:08:49.101 { 00:08:49.101 "dma_device_id": "system", 00:08:49.101 "dma_device_type": 1 00:08:49.101 }, 00:08:49.101 { 00:08:49.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.101 "dma_device_type": 2 00:08:49.101 } 00:08:49.101 ], 00:08:49.101 "driver_specific": {} 00:08:49.101 } 00:08:49.101 ]' 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:49.101 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:49.101 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.102 [2024-12-06 15:35:37.690674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:49.102 [2024-12-06 15:35:37.690779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.102 [2024-12-06 15:35:37.690875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:49.102 [2024-12-06 15:35:37.690899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.102 [2024-12-06 15:35:37.694672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.102 [2024-12-06 15:35:37.694717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:49.102 Passthru0 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.102 
15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:49.102 { 00:08:49.102 "name": "Malloc0", 00:08:49.102 "aliases": [ 00:08:49.102 "4f0dcd11-c748-4622-8e01-627b3cf7cb65" 00:08:49.102 ], 00:08:49.102 "product_name": "Malloc disk", 00:08:49.102 "block_size": 512, 00:08:49.102 "num_blocks": 16384, 00:08:49.102 "uuid": "4f0dcd11-c748-4622-8e01-627b3cf7cb65", 00:08:49.102 "assigned_rate_limits": { 00:08:49.102 "rw_ios_per_sec": 0, 00:08:49.102 "rw_mbytes_per_sec": 0, 00:08:49.102 "r_mbytes_per_sec": 0, 00:08:49.102 "w_mbytes_per_sec": 0 00:08:49.102 }, 00:08:49.102 "claimed": true, 00:08:49.102 "claim_type": "exclusive_write", 00:08:49.102 "zoned": false, 00:08:49.102 "supported_io_types": { 00:08:49.102 "read": true, 00:08:49.102 "write": true, 00:08:49.102 "unmap": true, 00:08:49.102 "flush": true, 00:08:49.102 "reset": true, 00:08:49.102 "nvme_admin": false, 00:08:49.102 "nvme_io": false, 00:08:49.102 "nvme_io_md": false, 00:08:49.102 "write_zeroes": true, 00:08:49.102 "zcopy": true, 00:08:49.102 "get_zone_info": false, 00:08:49.102 "zone_management": false, 00:08:49.102 "zone_append": false, 00:08:49.102 "compare": false, 00:08:49.102 "compare_and_write": false, 00:08:49.102 "abort": true, 00:08:49.102 "seek_hole": false, 00:08:49.102 "seek_data": false, 00:08:49.102 "copy": true, 00:08:49.102 "nvme_iov_md": false 00:08:49.102 }, 00:08:49.102 "memory_domains": [ 00:08:49.102 { 00:08:49.102 "dma_device_id": "system", 00:08:49.102 "dma_device_type": 1 00:08:49.102 }, 00:08:49.102 { 00:08:49.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.102 "dma_device_type": 2 00:08:49.102 } 00:08:49.102 ], 00:08:49.102 "driver_specific": {} 00:08:49.102 }, 00:08:49.102 { 00:08:49.102 "name": "Passthru0", 00:08:49.102 "aliases": [ 00:08:49.102 "4231ea59-5aa0-5fc6-b6f9-2d5d0b28a380" 00:08:49.102 ], 00:08:49.102 "product_name": "passthru", 00:08:49.102 "block_size": 512, 00:08:49.102 "num_blocks": 16384, 00:08:49.102 "uuid": "4231ea59-5aa0-5fc6-b6f9-2d5d0b28a380", 00:08:49.102 "assigned_rate_limits": { 00:08:49.102 "rw_ios_per_sec": 0, 00:08:49.102 "rw_mbytes_per_sec": 0, 00:08:49.102 "r_mbytes_per_sec": 0, 00:08:49.102 "w_mbytes_per_sec": 0 00:08:49.102 }, 00:08:49.102 "claimed": false, 00:08:49.102 "zoned": false, 00:08:49.102 "supported_io_types": { 00:08:49.102 "read": true, 00:08:49.102 "write": true, 00:08:49.102 "unmap": true, 00:08:49.102 "flush": true, 00:08:49.102 "reset": true, 00:08:49.102 "nvme_admin": false, 00:08:49.102 "nvme_io": false, 00:08:49.102 "nvme_io_md": false, 00:08:49.102 "write_zeroes": true, 00:08:49.102 "zcopy": true, 00:08:49.102 "get_zone_info": false, 00:08:49.102 "zone_management": false, 00:08:49.102 "zone_append": false, 00:08:49.102 "compare": false, 00:08:49.102 "compare_and_write": false, 00:08:49.102 "abort": true, 00:08:49.102 "seek_hole": false, 00:08:49.102 "seek_data": false, 00:08:49.102 "copy": true, 00:08:49.102 "nvme_iov_md": false 00:08:49.102 }, 00:08:49.102 "memory_domains": [ 00:08:49.102 { 00:08:49.102 "dma_device_id": "system", 00:08:49.102 "dma_device_type": 1 00:08:49.102 }, 00:08:49.102 { 00:08:49.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.102 "dma_device_type": 2 
00:08:49.102 } 00:08:49.102 ], 00:08:49.102 "driver_specific": { 00:08:49.102 "passthru": { 00:08:49.102 "name": "Passthru0", 00:08:49.102 "base_bdev_name": "Malloc0" 00:08:49.102 } 00:08:49.102 } 00:08:49.102 } 00:08:49.102 ]' 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.102 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.102 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.361 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:49.361 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:49.361 15:35:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:49.361 00:08:49.361 real 0m0.325s 00:08:49.361 user 0m0.218s 00:08:49.361 sys 0m0.035s 00:08:49.361 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.361 15:35:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 ************************************ 00:08:49.361 END TEST rpc_integrity 00:08:49.361 ************************************ 00:08:49.361 15:35:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:49.361 15:35:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.361 15:35:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.361 15:35:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 ************************************ 00:08:49.361 START TEST rpc_plugins 00:08:49.361 ************************************ 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:49.361 15:35:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.361 15:35:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:49.361 15:35:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 15:35:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.361 15:35:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:49.361 { 00:08:49.361 "name": "Malloc1", 00:08:49.361 "aliases": 
[ 00:08:49.361 "9441d745-4d06-4738-91b6-4f29020c1641" 00:08:49.361 ], 00:08:49.361 "product_name": "Malloc disk", 00:08:49.361 "block_size": 4096, 00:08:49.361 "num_blocks": 256, 00:08:49.361 "uuid": "9441d745-4d06-4738-91b6-4f29020c1641", 00:08:49.361 "assigned_rate_limits": { 00:08:49.361 "rw_ios_per_sec": 0, 00:08:49.361 "rw_mbytes_per_sec": 0, 00:08:49.361 "r_mbytes_per_sec": 0, 00:08:49.361 "w_mbytes_per_sec": 0 00:08:49.361 }, 00:08:49.361 "claimed": false, 00:08:49.361 "zoned": false, 00:08:49.361 "supported_io_types": { 00:08:49.361 "read": true, 00:08:49.361 "write": true, 00:08:49.361 "unmap": true, 00:08:49.361 "flush": true, 00:08:49.361 "reset": true, 00:08:49.361 "nvme_admin": false, 00:08:49.361 "nvme_io": false, 00:08:49.361 "nvme_io_md": false, 00:08:49.361 "write_zeroes": true, 00:08:49.361 "zcopy": true, 00:08:49.361 "get_zone_info": false, 00:08:49.361 "zone_management": false, 00:08:49.361 "zone_append": false, 00:08:49.361 "compare": false, 00:08:49.361 "compare_and_write": false, 00:08:49.361 "abort": true, 00:08:49.361 "seek_hole": false, 00:08:49.361 "seek_data": false, 00:08:49.361 "copy": true, 00:08:49.361 "nvme_iov_md": false 00:08:49.361 }, 00:08:49.361 "memory_domains": [ 00:08:49.361 { 00:08:49.361 "dma_device_id": "system", 00:08:49.361 "dma_device_type": 1 00:08:49.361 }, 00:08:49.361 { 00:08:49.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.361 "dma_device_type": 2 00:08:49.361 } 00:08:49.361 ], 00:08:49.361 "driver_specific": {} 00:08:49.361 } 00:08:49.361 ]' 00:08:49.361 15:35:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:49.361 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:49.361 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.361 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:49.361 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.361 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:49.361 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:49.621 15:35:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:49.621 00:08:49.621 real 0m0.169s 00:08:49.621 user 0m0.104s 00:08:49.621 sys 0m0.027s 00:08:49.621 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.621 15:35:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:49.621 ************************************ 00:08:49.621 END TEST rpc_plugins 00:08:49.621 ************************************ 00:08:49.621 15:35:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:49.621 15:35:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.621 15:35:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.621 15:35:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.621 ************************************ 00:08:49.621 START TEST rpc_trace_cmd_test 00:08:49.621 ************************************ 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:49.621 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70213", 00:08:49.621 "tpoint_group_mask": "0x8", 00:08:49.621 "iscsi_conn": { 00:08:49.621 "mask": "0x2", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "scsi": { 00:08:49.621 "mask": "0x4", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "bdev": { 00:08:49.621 "mask": "0x8", 00:08:49.621 "tpoint_mask": "0xffffffffffffffff" 00:08:49.621 }, 00:08:49.621 "nvmf_rdma": { 00:08:49.621 "mask": "0x10", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "nvmf_tcp": { 00:08:49.621 "mask": "0x20", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "ftl": { 00:08:49.621 "mask": "0x40", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "blobfs": { 00:08:49.621 "mask": "0x80", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "dsa": { 00:08:49.621 "mask": "0x200", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "thread": { 00:08:49.621 "mask": "0x400", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "nvme_pcie": { 00:08:49.621 "mask": "0x800", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "iaa": { 00:08:49.621 "mask": "0x1000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "nvme_tcp": { 00:08:49.621 "mask": "0x2000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "bdev_nvme": { 00:08:49.621 "mask": "0x4000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "sock": { 00:08:49.621 "mask": "0x8000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "blob": { 00:08:49.621 "mask": "0x10000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "bdev_raid": { 00:08:49.621 "mask": "0x20000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 }, 00:08:49.621 "scheduler": { 00:08:49.621 "mask": "0x40000", 00:08:49.621 "tpoint_mask": "0x0" 00:08:49.621 } 00:08:49.621 }' 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:49.621 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:49.880 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:49.881 00:08:49.881 real 0m0.287s 00:08:49.881 user 0m0.242s 00:08:49.881 sys 0m0.034s 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:49.881 15:35:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.881 ************************************ 00:08:49.881 END TEST rpc_trace_cmd_test 00:08:49.881 ************************************ 00:08:49.881 15:35:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:49.881 15:35:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:49.881 15:35:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:49.881 15:35:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.881 15:35:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.881 15:35:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.881 ************************************ 00:08:49.881 START TEST rpc_daemon_integrity 00:08:49.881 ************************************ 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.881 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:50.140 { 00:08:50.140 "name": "Malloc2", 00:08:50.140 "aliases": [ 00:08:50.140 "585da8ff-0a4d-4588-8108-32adbcdcd853" 00:08:50.140 ], 00:08:50.140 "product_name": "Malloc disk", 00:08:50.140 "block_size": 512, 00:08:50.140 "num_blocks": 16384, 00:08:50.140 "uuid": "585da8ff-0a4d-4588-8108-32adbcdcd853", 00:08:50.140 "assigned_rate_limits": { 00:08:50.140 "rw_ios_per_sec": 0, 00:08:50.140 "rw_mbytes_per_sec": 0, 00:08:50.140 "r_mbytes_per_sec": 0, 00:08:50.140 "w_mbytes_per_sec": 0 00:08:50.140 }, 00:08:50.140 "claimed": false, 00:08:50.140 "zoned": false, 00:08:50.140 "supported_io_types": { 00:08:50.140 "read": true, 00:08:50.140 "write": true, 00:08:50.140 "unmap": true, 00:08:50.140 "flush": true, 00:08:50.140 "reset": true, 00:08:50.140 "nvme_admin": false, 00:08:50.140 "nvme_io": false, 00:08:50.140 "nvme_io_md": false, 00:08:50.140 "write_zeroes": true, 00:08:50.140 "zcopy": true, 00:08:50.140 "get_zone_info": false, 00:08:50.140 "zone_management": false, 00:08:50.140 "zone_append": false, 00:08:50.140 "compare": false, 00:08:50.140 
"compare_and_write": false, 00:08:50.140 "abort": true, 00:08:50.140 "seek_hole": false, 00:08:50.140 "seek_data": false, 00:08:50.140 "copy": true, 00:08:50.140 "nvme_iov_md": false 00:08:50.140 }, 00:08:50.140 "memory_domains": [ 00:08:50.140 { 00:08:50.140 "dma_device_id": "system", 00:08:50.140 "dma_device_type": 1 00:08:50.140 }, 00:08:50.140 { 00:08:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.140 "dma_device_type": 2 00:08:50.140 } 00:08:50.140 ], 00:08:50.140 "driver_specific": {} 00:08:50.140 } 00:08:50.140 ]' 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.140 [2024-12-06 15:35:38.639905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:50.140 [2024-12-06 15:35:38.640036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.140 [2024-12-06 15:35:38.640081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:50.140 [2024-12-06 15:35:38.640100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.140 [2024-12-06 15:35:38.643547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.140 [2024-12-06 15:35:38.643590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:50.140 Passthru0 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.140 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:50.140 { 00:08:50.140 "name": "Malloc2", 00:08:50.140 "aliases": [ 00:08:50.140 "585da8ff-0a4d-4588-8108-32adbcdcd853" 00:08:50.140 ], 00:08:50.140 "product_name": "Malloc disk", 00:08:50.140 "block_size": 512, 00:08:50.140 "num_blocks": 16384, 00:08:50.140 "uuid": "585da8ff-0a4d-4588-8108-32adbcdcd853", 00:08:50.140 "assigned_rate_limits": { 00:08:50.140 "rw_ios_per_sec": 0, 00:08:50.140 "rw_mbytes_per_sec": 0, 00:08:50.140 "r_mbytes_per_sec": 0, 00:08:50.140 "w_mbytes_per_sec": 0 00:08:50.140 }, 00:08:50.140 "claimed": true, 00:08:50.140 "claim_type": "exclusive_write", 00:08:50.140 "zoned": false, 00:08:50.140 "supported_io_types": { 00:08:50.140 "read": true, 00:08:50.140 "write": true, 00:08:50.140 "unmap": true, 00:08:50.140 "flush": true, 00:08:50.140 "reset": true, 00:08:50.140 "nvme_admin": false, 00:08:50.140 "nvme_io": false, 00:08:50.140 "nvme_io_md": false, 00:08:50.140 "write_zeroes": true, 00:08:50.140 "zcopy": true, 00:08:50.140 "get_zone_info": false, 00:08:50.140 "zone_management": false, 00:08:50.140 "zone_append": false, 00:08:50.140 "compare": false, 00:08:50.140 "compare_and_write": false, 00:08:50.140 "abort": true, 00:08:50.140 "seek_hole": false, 00:08:50.140 "seek_data": false, 
00:08:50.140 "copy": true, 00:08:50.140 "nvme_iov_md": false 00:08:50.140 }, 00:08:50.140 "memory_domains": [ 00:08:50.140 { 00:08:50.140 "dma_device_id": "system", 00:08:50.140 "dma_device_type": 1 00:08:50.140 }, 00:08:50.140 { 00:08:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.140 "dma_device_type": 2 00:08:50.140 } 00:08:50.140 ], 00:08:50.140 "driver_specific": {} 00:08:50.140 }, 00:08:50.140 { 00:08:50.140 "name": "Passthru0", 00:08:50.140 "aliases": [ 00:08:50.140 "32c4a410-47d5-5222-8760-300d7c781892" 00:08:50.140 ], 00:08:50.140 "product_name": "passthru", 00:08:50.140 "block_size": 512, 00:08:50.140 "num_blocks": 16384, 00:08:50.140 "uuid": "32c4a410-47d5-5222-8760-300d7c781892", 00:08:50.140 "assigned_rate_limits": { 00:08:50.140 "rw_ios_per_sec": 0, 00:08:50.140 "rw_mbytes_per_sec": 0, 00:08:50.140 "r_mbytes_per_sec": 0, 00:08:50.140 "w_mbytes_per_sec": 0 00:08:50.141 }, 00:08:50.141 "claimed": false, 00:08:50.141 "zoned": false, 00:08:50.141 "supported_io_types": { 00:08:50.141 "read": true, 00:08:50.141 "write": true, 00:08:50.141 "unmap": true, 00:08:50.141 "flush": true, 00:08:50.141 "reset": true, 00:08:50.141 "nvme_admin": false, 00:08:50.141 "nvme_io": false, 00:08:50.141 "nvme_io_md": false, 00:08:50.141 "write_zeroes": true, 00:08:50.141 "zcopy": true, 00:08:50.141 "get_zone_info": false, 00:08:50.141 "zone_management": false, 00:08:50.141 "zone_append": false, 00:08:50.141 "compare": false, 00:08:50.141 "compare_and_write": false, 00:08:50.141 "abort": true, 00:08:50.141 "seek_hole": false, 00:08:50.141 "seek_data": false, 00:08:50.141 "copy": true, 00:08:50.141 "nvme_iov_md": false 00:08:50.141 }, 00:08:50.141 "memory_domains": [ 00:08:50.141 { 00:08:50.141 "dma_device_id": "system", 00:08:50.141 "dma_device_type": 1 00:08:50.141 }, 00:08:50.141 { 00:08:50.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.141 "dma_device_type": 2 00:08:50.141 } 00:08:50.141 ], 00:08:50.141 "driver_specific": { 00:08:50.141 "passthru": { 00:08:50.141 "name": "Passthru0", 00:08:50.141 "base_bdev_name": "Malloc2" 00:08:50.141 } 00:08:50.141 } 00:08:50.141 } 00:08:50.141 ]' 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:50.141 00:08:50.141 real 0m0.333s 00:08:50.141 user 0m0.216s 00:08:50.141 sys 0m0.046s 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.141 ************************************ 00:08:50.141 END TEST rpc_daemon_integrity 00:08:50.141 ************************************ 00:08:50.141 15:35:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.399 15:35:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:50.399 15:35:38 rpc -- rpc/rpc.sh@84 -- # killprocess 70213 00:08:50.399 15:35:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 70213 ']' 00:08:50.399 15:35:38 rpc -- common/autotest_common.sh@958 -- # kill -0 70213 00:08:50.399 15:35:38 rpc -- common/autotest_common.sh@959 -- # uname 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70213 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.400 killing process with pid 70213 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70213' 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@973 -- # kill 70213 00:08:50.400 15:35:38 rpc -- common/autotest_common.sh@978 -- # wait 70213 00:08:50.966 00:08:50.966 real 0m3.304s 00:08:50.966 user 0m4.017s 00:08:50.966 sys 0m0.943s 00:08:50.966 15:35:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.966 15:35:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.966 ************************************ 00:08:50.966 END TEST rpc 00:08:50.966 ************************************ 00:08:50.966 15:35:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:50.966 15:35:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.966 15:35:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.966 15:35:39 -- common/autotest_common.sh@10 -- # set +x 00:08:50.966 ************************************ 00:08:50.966 START TEST skip_rpc 00:08:50.966 ************************************ 00:08:50.966 15:35:39 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:50.966 * Looking for test storage... 
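Note: the rpc_daemon_integrity pass above boils down to a create/claim/delete round-trip over JSON-RPC: an empty bdev list, a malloc bdev (16384 blocks of 512 B, i.e. 8 MiB), a passthru vbdev that claims it, then teardown back to an empty list. A minimal standalone sketch of the same sequence, assuming a running spdk_tgt, the stock scripts/rpc.py client at this run's repo path, and jq on PATH:

#!/usr/bin/env bash
# Sketch of the rpc_daemon_integrity round-trip; paths match this run's repo layout.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # start with no bdevs
malloc=$($rpc bdev_malloc_create 8 512)               # 8 MiB, 512 B blocks; prints the new name
[ "$($rpc bdev_get_bdevs | jq length)" -eq 1 ]
$rpc bdev_passthru_create -b "$malloc" -p Passthru0   # the passthru vbdev claims its base
$rpc bdev_get_bdevs | jq -e --arg n "$malloc" \
  '.[] | select(.name == $n) | .claimed' >/dev/null   # claim must now read true
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # back to empty

Each jq length assertion here mirrors the '[' N == N ']' checks the test emitted above; any mismatch aborts the sketch non-zero, just as the test harness would fail the case.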
00:08:50.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:50.966 15:35:39 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.966 15:35:39 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.966 15:35:39 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.225 15:35:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.225 --rc genhtml_branch_coverage=1 00:08:51.225 --rc genhtml_function_coverage=1 00:08:51.225 --rc genhtml_legend=1 00:08:51.225 --rc geninfo_all_blocks=1 00:08:51.225 --rc geninfo_unexecuted_blocks=1 00:08:51.225 00:08:51.225 ' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.225 --rc genhtml_branch_coverage=1 00:08:51.225 --rc genhtml_function_coverage=1 00:08:51.225 --rc genhtml_legend=1 00:08:51.225 --rc geninfo_all_blocks=1 00:08:51.225 --rc geninfo_unexecuted_blocks=1 00:08:51.225 00:08:51.225 ' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.225 --rc genhtml_branch_coverage=1 00:08:51.225 --rc genhtml_function_coverage=1 00:08:51.225 --rc genhtml_legend=1 00:08:51.225 --rc geninfo_all_blocks=1 00:08:51.225 --rc geninfo_unexecuted_blocks=1 00:08:51.225 00:08:51.225 ' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.225 --rc genhtml_branch_coverage=1 00:08:51.225 --rc genhtml_function_coverage=1 00:08:51.225 --rc genhtml_legend=1 00:08:51.225 --rc geninfo_all_blocks=1 00:08:51.225 --rc geninfo_unexecuted_blocks=1 00:08:51.225 00:08:51.225 ' 00:08:51.225 15:35:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:51.225 15:35:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:51.225 15:35:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.225 15:35:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.225 ************************************ 00:08:51.225 START TEST skip_rpc 00:08:51.225 ************************************ 00:08:51.225 15:35:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:51.225 15:35:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70424 00:08:51.225 15:35:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:51.225 15:35:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.225 15:35:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:51.225 [2024-12-06 15:35:39.889552] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:08:51.225 [2024-12-06 15:35:39.889793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70424 ] 00:08:51.484 [2024-12-06 15:35:40.056990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.484 [2024-12-06 15:35:40.125624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.748 15:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:56.748 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:56.748 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70424 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70424 ']' 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70424 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70424 00:08:56.749 killing process with pid 70424 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70424' 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70424 00:08:56.749 15:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70424 00:08:56.749 00:08:56.749 real 0m5.622s 00:08:56.749 user 0m5.075s 00:08:56.749 sys 0m0.457s 00:08:56.749 15:35:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.749 15:35:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.749 ************************************ 00:08:56.749 END TEST skip_rpc 00:08:56.749 
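Note: what the skip_rpc leg just verified is a negative guarantee: started with --no-rpc-server, the target never listens on /var/tmp/spdk.sock, so spdk_get_version has to fail (the '[[ 1 == 0 ]]' above is that failure being asserted). A hedged sketch of the same check, using this run's binary path and a plain sleep in place of the test's readiness handling:

#!/usr/bin/env bash
# Sketch: with --no-rpc-server, any RPC call must fail.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$tgt --no-rpc-server -m 0x1 & pid=$!
sleep 5                                    # crude settle, mirroring the test's own sleep 5
if $rpc spdk_get_version >/dev/null 2>&1; then
  echo 'unexpected: RPC answered without a server' >&2
  kill "$pid"; exit 1
fi
kill "$pid"; wait "$pid" || true           # SIGTERM makes wait return non-zero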
************************************ 00:08:56.749 15:35:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:56.749 15:35:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.749 15:35:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.749 15:35:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.749 ************************************ 00:08:56.749 START TEST skip_rpc_with_json 00:08:56.749 ************************************ 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70513 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70513 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 70513 ']' 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.749 15:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:57.006 [2024-12-06 15:35:45.557082] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:08:57.006 [2024-12-06 15:35:45.557318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70513 ] 00:08:57.263 [2024-12-06 15:35:45.718563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.263 [2024-12-06 15:35:45.768893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:57.828 [2024-12-06 15:35:46.506557] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:57.828 request: 00:08:57.828 { 00:08:57.828 "trtype": "tcp", 00:08:57.828 "method": "nvmf_get_transports", 00:08:57.828 "req_id": 1 00:08:57.828 } 00:08:57.828 Got JSON-RPC error response 00:08:57.828 response: 00:08:57.828 { 00:08:57.828 "code": -19, 00:08:57.828 "message": "No such device" 00:08:57.828 } 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.828 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:57.828 [2024-12-06 15:35:46.518704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.093 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.093 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:58.093 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.093 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.094 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:58.094 { 00:08:58.094 "subsystems": [ 00:08:58.094 { 00:08:58.094 "subsystem": "fsdev", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "fsdev_set_opts", 00:08:58.094 "params": { 00:08:58.094 "fsdev_io_pool_size": 65535, 00:08:58.094 "fsdev_io_cache_size": 256 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "keyring", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "iobuf", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "iobuf_set_options", 00:08:58.094 "params": { 00:08:58.094 "small_pool_count": 8192, 00:08:58.094 "large_pool_count": 1024, 00:08:58.094 "small_bufsize": 8192, 00:08:58.094 "large_bufsize": 135168, 00:08:58.094 "enable_numa": false 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "sock", 00:08:58.094 "config": [ 00:08:58.094 { 
00:08:58.094 "method": "sock_set_default_impl", 00:08:58.094 "params": { 00:08:58.094 "impl_name": "posix" 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "sock_impl_set_options", 00:08:58.094 "params": { 00:08:58.094 "impl_name": "ssl", 00:08:58.094 "recv_buf_size": 4096, 00:08:58.094 "send_buf_size": 4096, 00:08:58.094 "enable_recv_pipe": true, 00:08:58.094 "enable_quickack": false, 00:08:58.094 "enable_placement_id": 0, 00:08:58.094 "enable_zerocopy_send_server": true, 00:08:58.094 "enable_zerocopy_send_client": false, 00:08:58.094 "zerocopy_threshold": 0, 00:08:58.094 "tls_version": 0, 00:08:58.094 "enable_ktls": false 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "sock_impl_set_options", 00:08:58.094 "params": { 00:08:58.094 "impl_name": "posix", 00:08:58.094 "recv_buf_size": 2097152, 00:08:58.094 "send_buf_size": 2097152, 00:08:58.094 "enable_recv_pipe": true, 00:08:58.094 "enable_quickack": false, 00:08:58.094 "enable_placement_id": 0, 00:08:58.094 "enable_zerocopy_send_server": true, 00:08:58.094 "enable_zerocopy_send_client": false, 00:08:58.094 "zerocopy_threshold": 0, 00:08:58.094 "tls_version": 0, 00:08:58.094 "enable_ktls": false 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "vmd", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "accel", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "accel_set_options", 00:08:58.094 "params": { 00:08:58.094 "small_cache_size": 128, 00:08:58.094 "large_cache_size": 16, 00:08:58.094 "task_count": 2048, 00:08:58.094 "sequence_count": 2048, 00:08:58.094 "buf_count": 2048 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "bdev", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "bdev_set_options", 00:08:58.094 "params": { 00:08:58.094 "bdev_io_pool_size": 65535, 00:08:58.094 "bdev_io_cache_size": 256, 00:08:58.094 "bdev_auto_examine": true, 00:08:58.094 "iobuf_small_cache_size": 128, 00:08:58.094 "iobuf_large_cache_size": 16 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "bdev_raid_set_options", 00:08:58.094 "params": { 00:08:58.094 "process_window_size_kb": 1024, 00:08:58.094 "process_max_bandwidth_mb_sec": 0 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "bdev_iscsi_set_options", 00:08:58.094 "params": { 00:08:58.094 "timeout_sec": 30 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "bdev_nvme_set_options", 00:08:58.094 "params": { 00:08:58.094 "action_on_timeout": "none", 00:08:58.094 "timeout_us": 0, 00:08:58.094 "timeout_admin_us": 0, 00:08:58.094 "keep_alive_timeout_ms": 10000, 00:08:58.094 "arbitration_burst": 0, 00:08:58.094 "low_priority_weight": 0, 00:08:58.094 "medium_priority_weight": 0, 00:08:58.094 "high_priority_weight": 0, 00:08:58.094 "nvme_adminq_poll_period_us": 10000, 00:08:58.094 "nvme_ioq_poll_period_us": 0, 00:08:58.094 "io_queue_requests": 0, 00:08:58.094 "delay_cmd_submit": true, 00:08:58.094 "transport_retry_count": 4, 00:08:58.094 "bdev_retry_count": 3, 00:08:58.094 "transport_ack_timeout": 0, 00:08:58.094 "ctrlr_loss_timeout_sec": 0, 00:08:58.094 "reconnect_delay_sec": 0, 00:08:58.094 "fast_io_fail_timeout_sec": 0, 00:08:58.094 "disable_auto_failback": false, 00:08:58.094 "generate_uuids": false, 00:08:58.094 "transport_tos": 0, 00:08:58.094 "nvme_error_stat": false, 00:08:58.094 "rdma_srq_size": 0, 00:08:58.094 "io_path_stat": false, 
00:08:58.094 "allow_accel_sequence": false, 00:08:58.094 "rdma_max_cq_size": 0, 00:08:58.094 "rdma_cm_event_timeout_ms": 0, 00:08:58.094 "dhchap_digests": [ 00:08:58.094 "sha256", 00:08:58.094 "sha384", 00:08:58.094 "sha512" 00:08:58.094 ], 00:08:58.094 "dhchap_dhgroups": [ 00:08:58.094 "null", 00:08:58.094 "ffdhe2048", 00:08:58.094 "ffdhe3072", 00:08:58.094 "ffdhe4096", 00:08:58.094 "ffdhe6144", 00:08:58.094 "ffdhe8192" 00:08:58.094 ] 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "bdev_nvme_set_hotplug", 00:08:58.094 "params": { 00:08:58.094 "period_us": 100000, 00:08:58.094 "enable": false 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "bdev_wait_for_examine" 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "scsi", 00:08:58.094 "config": null 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "scheduler", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "framework_set_scheduler", 00:08:58.094 "params": { 00:08:58.094 "name": "static" 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "vhost_scsi", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "vhost_blk", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "ublk", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "nbd", 00:08:58.094 "config": [] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "nvmf", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "nvmf_set_config", 00:08:58.094 "params": { 00:08:58.094 "discovery_filter": "match_any", 00:08:58.094 "admin_cmd_passthru": { 00:08:58.094 "identify_ctrlr": false 00:08:58.094 }, 00:08:58.094 "dhchap_digests": [ 00:08:58.094 "sha256", 00:08:58.094 "sha384", 00:08:58.094 "sha512" 00:08:58.094 ], 00:08:58.094 "dhchap_dhgroups": [ 00:08:58.094 "null", 00:08:58.094 "ffdhe2048", 00:08:58.094 "ffdhe3072", 00:08:58.094 "ffdhe4096", 00:08:58.094 "ffdhe6144", 00:08:58.094 "ffdhe8192" 00:08:58.094 ] 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "nvmf_set_max_subsystems", 00:08:58.094 "params": { 00:08:58.094 "max_subsystems": 1024 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "nvmf_set_crdt", 00:08:58.094 "params": { 00:08:58.094 "crdt1": 0, 00:08:58.094 "crdt2": 0, 00:08:58.094 "crdt3": 0 00:08:58.094 } 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "method": "nvmf_create_transport", 00:08:58.094 "params": { 00:08:58.094 "trtype": "TCP", 00:08:58.094 "max_queue_depth": 128, 00:08:58.094 "max_io_qpairs_per_ctrlr": 127, 00:08:58.094 "in_capsule_data_size": 4096, 00:08:58.094 "max_io_size": 131072, 00:08:58.094 "io_unit_size": 131072, 00:08:58.094 "max_aq_depth": 128, 00:08:58.094 "num_shared_buffers": 511, 00:08:58.094 "buf_cache_size": 4294967295, 00:08:58.094 "dif_insert_or_strip": false, 00:08:58.094 "zcopy": false, 00:08:58.094 "c2h_success": true, 00:08:58.094 "sock_priority": 0, 00:08:58.094 "abort_timeout_sec": 1, 00:08:58.094 "ack_timeout": 0, 00:08:58.094 "data_wr_pool_size": 0 00:08:58.094 } 00:08:58.094 } 00:08:58.094 ] 00:08:58.094 }, 00:08:58.094 { 00:08:58.094 "subsystem": "iscsi", 00:08:58.094 "config": [ 00:08:58.094 { 00:08:58.094 "method": "iscsi_set_options", 00:08:58.094 "params": { 00:08:58.094 "node_base": "iqn.2016-06.io.spdk", 00:08:58.094 "max_sessions": 128, 00:08:58.094 "max_connections_per_session": 2, 00:08:58.094 "max_queue_depth": 64, 00:08:58.094 
"default_time2wait": 2, 00:08:58.094 "default_time2retain": 20, 00:08:58.094 "first_burst_length": 8192, 00:08:58.094 "immediate_data": true, 00:08:58.094 "allow_duplicated_isid": false, 00:08:58.094 "error_recovery_level": 0, 00:08:58.095 "nop_timeout": 60, 00:08:58.095 "nop_in_interval": 30, 00:08:58.095 "disable_chap": false, 00:08:58.095 "require_chap": false, 00:08:58.095 "mutual_chap": false, 00:08:58.095 "chap_group": 0, 00:08:58.095 "max_large_datain_per_connection": 64, 00:08:58.095 "max_r2t_per_connection": 4, 00:08:58.095 "pdu_pool_size": 36864, 00:08:58.095 "immediate_data_pool_size": 16384, 00:08:58.095 "data_out_pool_size": 2048 00:08:58.095 } 00:08:58.095 } 00:08:58.095 ] 00:08:58.095 } 00:08:58.095 ] 00:08:58.095 } 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70513 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70513 ']' 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70513 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70513 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.095 killing process with pid 70513 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70513' 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70513 00:08:58.095 15:35:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70513 00:08:58.662 15:35:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70541 00:08:58.662 15:35:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:58.662 15:35:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70541 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70541 ']' 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70541 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70541 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.930 killing process with pid 70541 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70541' 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 70541 00:09:03.930 15:35:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70541 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:04.496 00:09:04.496 real 0m7.589s 00:09:04.496 user 0m6.957s 00:09:04.496 sys 0m1.057s 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 ************************************ 00:09:04.496 END TEST skip_rpc_with_json 00:09:04.496 ************************************ 00:09:04.496 15:35:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:04.496 15:35:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.496 15:35:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.496 15:35:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 ************************************ 00:09:04.496 START TEST skip_rpc_with_delay 00:09:04.496 ************************************ 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:04.496 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:04.755 [2024-12-06 15:35:53.222462] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
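Note: before the --wait-for-rpc probe above, the with_json leg closed its loop. That leg is a configuration round-trip: mutate live state (nvmf_create_transport -t tcp), snapshot it with save_config, relaunch the target from that JSON, and grep the new instance's log for the 'TCP Transport Init' notice to prove the state replayed. A compact sketch under the same assumptions, with the file names the test itself uses:

#!/usr/bin/env bash
# Sketch: save_config / --json round-trip, verified through the new target's log.
set -euo pipefail
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
$tgt -m 0x1 & pid=$!                       # live target with an RPC server
sleep 5
$rpc nvmf_create_transport -t tcp          # state worth snapshotting
$rpc save_config > "$cfg"
kill "$pid"; wait "$pid" || true           # stop the live target before relaunching
$tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 & pid=$!
sleep 5
grep -q 'TCP Transport Init' "$log"        # the transport was recreated from JSON
kill "$pid"

The --wait-for-rpc error that follows is likewise the expected outcome, not a fault: deferring subsystem init only makes sense when an RPC server will later deliver framework_start_init, so the app refuses the flag combination outright.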
00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.755 00:09:04.755 real 0m0.221s 00:09:04.755 user 0m0.115s 00:09:04.755 sys 0m0.100s 00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.755 15:35:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:04.755 ************************************ 00:09:04.755 END TEST skip_rpc_with_delay 00:09:04.755 ************************************ 00:09:04.755 15:35:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:04.755 15:35:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:04.755 15:35:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:04.755 15:35:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.755 15:35:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.755 15:35:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.755 ************************************ 00:09:04.755 START TEST exit_on_failed_rpc_init 00:09:04.755 ************************************ 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70660 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70660 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 70660 ']' 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.755 15:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:05.013 [2024-12-06 15:35:53.493648] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:05.013 [2024-12-06 15:35:53.493901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70660 ] 00:09:05.013 [2024-12-06 15:35:53.654336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.271 [2024-12-06 15:35:53.726735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:05.838 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:06.097 [2024-12-06 15:35:54.623229] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:06.097 [2024-12-06 15:35:54.623960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70678 ] 00:09:06.355 [2024-12-06 15:35:54.789791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.355 [2024-12-06 15:35:54.839556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.355 [2024-12-06 15:35:54.839705] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:06.355 [2024-12-06 15:35:54.839754] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:06.355 [2024-12-06 15:35:54.839783] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70660 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 70660 ']' 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 70660 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70660 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70660' 00:09:06.355 killing process with pid 70660 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 70660 00:09:06.355 15:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 70660 00:09:06.920 00:09:06.921 real 0m2.125s 00:09:06.921 user 0m2.304s 00:09:06.921 sys 0m0.693s 00:09:06.921 15:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.921 15:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:06.921 ************************************ 00:09:06.921 END TEST exit_on_failed_rpc_init 00:09:06.921 ************************************ 00:09:06.921 15:35:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:06.921 00:09:06.921 real 0m15.984s 00:09:06.921 user 0m14.638s 00:09:06.921 sys 0m2.517s 00:09:06.921 15:35:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.921 15:35:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.921 ************************************ 00:09:06.921 END TEST skip_rpc 00:09:06.921 ************************************ 00:09:06.921 15:35:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:06.921 15:35:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.921 15:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.921 15:35:55 -- common/autotest_common.sh@10 -- # set +x 00:09:06.921 
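Note: the exit_on_failed_rpc_init failure above is deliberate. Both targets default their RPC listener to /var/tmp/spdk.sock, so the second instance's listen fails, spdk_app_stop fires with a non-zero code, and the wrapper maps it through es=234 -> 106 -> 1. A sketch of the collision and of the usual escape hatch, assuming spdk_tgt's -r/--rpc-socket option to give the second instance its own path (the spdk2.sock name is illustrative):

#!/usr/bin/env bash
# Sketch: default RPC socket collision, then a second instance on its own socket.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$tgt -m 0x1 & pid1=$!
sleep 5
if $tgt -m 0x2; then                            # same /var/tmp/spdk.sock: must exit non-zero
  echo 'unexpected: second target initialized' >&2
fi
$tgt -m 0x2 -r /var/tmp/spdk2.sock & pid2=$!    # a distinct RPC socket avoids the clash
sleep 5
kill "$pid1" "$pid2"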
************************************ 00:09:06.921 START TEST rpc_client 00:09:06.921 ************************************ 00:09:06.921 15:35:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:07.179 * Looking for test storage... 00:09:07.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.179 15:35:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.179 --rc genhtml_branch_coverage=1 00:09:07.179 --rc genhtml_function_coverage=1 00:09:07.179 --rc genhtml_legend=1 00:09:07.179 --rc geninfo_all_blocks=1 00:09:07.179 --rc geninfo_unexecuted_blocks=1 00:09:07.179 00:09:07.179 ' 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.179 --rc genhtml_branch_coverage=1 00:09:07.179 --rc genhtml_function_coverage=1 00:09:07.179 --rc genhtml_legend=1 00:09:07.179 --rc geninfo_all_blocks=1 00:09:07.179 --rc geninfo_unexecuted_blocks=1 00:09:07.179 00:09:07.179 ' 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.179 --rc genhtml_branch_coverage=1 00:09:07.179 --rc genhtml_function_coverage=1 00:09:07.179 --rc genhtml_legend=1 00:09:07.179 --rc geninfo_all_blocks=1 00:09:07.179 --rc geninfo_unexecuted_blocks=1 00:09:07.179 00:09:07.179 ' 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.179 --rc genhtml_branch_coverage=1 00:09:07.179 --rc genhtml_function_coverage=1 00:09:07.179 --rc genhtml_legend=1 00:09:07.179 --rc geninfo_all_blocks=1 00:09:07.179 --rc geninfo_unexecuted_blocks=1 00:09:07.179 00:09:07.179 ' 00:09:07.179 15:35:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:07.179 OK 00:09:07.179 15:35:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:07.179 00:09:07.179 real 0m0.240s 00:09:07.179 user 0m0.140s 00:09:07.179 sys 0m0.110s 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.179 ************************************ 00:09:07.179 END TEST rpc_client 00:09:07.179 15:35:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:07.179 ************************************ 00:09:07.179 15:35:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:07.179 15:35:55 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.179 15:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.179 15:35:55 -- common/autotest_common.sh@10 -- # set +x 00:09:07.439 ************************************ 00:09:07.439 START TEST json_config 00:09:07.439 ************************************ 00:09:07.439 15:35:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:07.439 15:35:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.439 15:35:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.439 15:35:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.439 15:35:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.439 15:35:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.439 15:35:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.439 15:35:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.439 15:35:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.439 15:35:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:07.439 15:35:56 json_config -- scripts/common.sh@345 -- # : 1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.439 15:35:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.439 15:35:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@353 -- # local d=1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.439 15:35:56 json_config -- scripts/common.sh@355 -- # echo 1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.439 15:35:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@353 -- # local d=2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.439 15:35:56 json_config -- scripts/common.sh@355 -- # echo 2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.439 15:35:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.439 15:35:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.439 15:35:56 json_config -- scripts/common.sh@368 -- # return 0 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.439 --rc genhtml_branch_coverage=1 00:09:07.439 --rc genhtml_function_coverage=1 00:09:07.439 --rc genhtml_legend=1 00:09:07.439 --rc geninfo_all_blocks=1 00:09:07.439 --rc geninfo_unexecuted_blocks=1 00:09:07.439 00:09:07.439 ' 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.439 --rc genhtml_branch_coverage=1 00:09:07.439 --rc genhtml_function_coverage=1 00:09:07.439 --rc genhtml_legend=1 00:09:07.439 --rc geninfo_all_blocks=1 00:09:07.439 --rc geninfo_unexecuted_blocks=1 00:09:07.439 00:09:07.439 ' 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.439 --rc genhtml_branch_coverage=1 00:09:07.439 --rc genhtml_function_coverage=1 00:09:07.439 --rc genhtml_legend=1 00:09:07.439 --rc geninfo_all_blocks=1 00:09:07.439 --rc geninfo_unexecuted_blocks=1 00:09:07.439 00:09:07.439 ' 00:09:07.439 15:35:56 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.439 --rc genhtml_branch_coverage=1 00:09:07.439 --rc genhtml_function_coverage=1 00:09:07.439 --rc genhtml_legend=1 00:09:07.439 --rc geninfo_all_blocks=1 00:09:07.439 --rc geninfo_unexecuted_blocks=1 00:09:07.439 00:09:07.439 ' 00:09:07.439 15:35:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.439 15:35:56 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.439 15:35:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8803dd04-8b7b-4aef-9a54-2657a611621c 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8803dd04-8b7b-4aef-9a54-2657a611621c 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.440 15:35:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.440 15:35:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.440 15:35:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.440 15:35:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.440 15:35:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.440 15:35:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.440 15:35:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.440 15:35:56 json_config -- paths/export.sh@5 -- # export PATH 00:09:07.440 15:35:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@51 -- # : 0 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.440 15:35:56 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.440 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.440 15:35:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:07.440 WARNING: No tests are enabled so not running JSON configuration tests 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:07.440 15:35:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:07.440 00:09:07.440 real 0m0.194s 00:09:07.440 user 0m0.123s 00:09:07.440 sys 0m0.078s 00:09:07.440 15:35:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.440 15:35:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:07.440 ************************************ 00:09:07.440 END TEST json_config 00:09:07.440 ************************************ 00:09:07.440 15:35:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:07.440 15:35:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.440 15:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.440 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:07.440 ************************************ 00:09:07.440 START TEST json_config_extra_key 00:09:07.440 ************************************ 00:09:07.440 15:35:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:07.699 15:35:56 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.699 15:35:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.699 15:35:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.699 15:35:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.699 15:35:56 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.699 15:35:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:07.700 15:35:56 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.700 15:35:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.700 --rc genhtml_branch_coverage=1 00:09:07.700 --rc genhtml_function_coverage=1 00:09:07.700 --rc genhtml_legend=1 00:09:07.700 --rc geninfo_all_blocks=1 00:09:07.700 --rc geninfo_unexecuted_blocks=1 00:09:07.700 00:09:07.700 ' 00:09:07.700 15:35:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.700 --rc genhtml_branch_coverage=1 00:09:07.700 --rc genhtml_function_coverage=1 00:09:07.700 --rc genhtml_legend=1 00:09:07.700 --rc geninfo_all_blocks=1 00:09:07.700 --rc geninfo_unexecuted_blocks=1 00:09:07.700 00:09:07.700 ' 00:09:07.700 15:35:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.700 --rc genhtml_branch_coverage=1 00:09:07.700 --rc genhtml_function_coverage=1 00:09:07.700 --rc genhtml_legend=1 00:09:07.700 --rc geninfo_all_blocks=1 00:09:07.700 --rc geninfo_unexecuted_blocks=1 00:09:07.700 00:09:07.700 ' 00:09:07.700 15:35:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.700 --rc genhtml_branch_coverage=1 00:09:07.700 --rc 
genhtml_function_coverage=1 00:09:07.700 --rc genhtml_legend=1 00:09:07.700 --rc geninfo_all_blocks=1 00:09:07.700 --rc geninfo_unexecuted_blocks=1 00:09:07.700 00:09:07.700 ' 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8803dd04-8b7b-4aef-9a54-2657a611621c 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8803dd04-8b7b-4aef-9a54-2657a611621c 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.700 15:35:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.700 15:35:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.700 15:35:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.700 15:35:56 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.700 15:35:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:07.700 15:35:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.700 15:35:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:07.700 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:07.701 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:07.701 INFO: launching applications... 00:09:07.701 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
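The json_config_extra_key trace above shows how test/json_config/common.sh tracks each app it launches in a set of associative arrays keyed by a logical app name ('target' here), with an ERR trap to abort on any failure. Below is a minimal re-creation of that bookkeeping pattern; the spdk_tgt binary path, socket, parameters, and extra_key.json path are the values traced in this log, while the on_error_exit body is a simplified assumption.

    #!/usr/bin/env bash
    # One entry per app, keyed by logical app name.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Abort the whole test on any failed command, reporting where it failed
    # (assumed body; the real helper lives in json_config/common.sh).
    on_error_exit() { echo "ERROR in $1 at line $2" >&2; exit 1; }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR

    app=target
    echo 'INFO: launching applications...'
    # app_params is expanded unquoted on purpose so "-m 0x1 -s 1024" word-splits.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!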
00:09:07.701 15:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70862 00:09:07.701 Waiting for target to run... 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70862 /var/tmp/spdk_tgt.sock 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70862 ']' 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:07.701 15:35:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.701 15:35:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:07.959 [2024-12-06 15:35:56.450986] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:07.959 [2024-12-06 15:35:56.451157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:09:08.559 [2024-12-06 15:35:57.004022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.559 [2024-12-06 15:35:57.058449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.816 15:35:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.816 15:35:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:08.816 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:08.816 INFO: shutting down applications... 00:09:08.816 15:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
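The startup that just completed went through 'waitforlisten 70862 /var/tmp/spdk_tgt.sock' (from autotest_common.sh, traced with max_retries=100): it blocks until the freshly launched target answers on its RPC socket. The sketch below is a simplified stand-in for that helper, not its real body; it assumes rpc.py's spdk_get_version method (which does appear in this log's rpc_get_methods listing) as the readiness probe.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            # Give up early if the process already died.
            kill -0 "$pid" 2>/dev/null || return 1
            # Ready once the app answers a trivial RPC on its socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   spdk_get_version >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }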
00:09:08.816 15:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70862 ]] 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70862 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70862 00:09:08.816 15:35:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:09.381 15:35:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:09.381 15:35:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:09.381 15:35:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70862 00:09:09.381 15:35:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70862 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:09.945 SPDK target shutdown done 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:09.945 15:35:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:09.945 Success 00:09:09.945 15:35:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:09.945 00:09:09.945 real 0m2.293s 00:09:09.945 user 0m1.692s 00:09:09.945 sys 0m0.668s 00:09:09.945 ************************************ 00:09:09.945 END TEST json_config_extra_key 00:09:09.945 ************************************ 00:09:09.945 15:35:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.945 15:35:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 15:35:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:09.945 15:35:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.945 15:35:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.945 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 ************************************ 00:09:09.945 START TEST alias_rpc 00:09:09.945 ************************************ 00:09:09.945 15:35:58 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:09.945 * Looking for test storage... 
00:09:09.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:09.945 15:35:58 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.945 15:35:58 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.945 15:35:58 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.204 15:35:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 15:35:58 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.204 --rc genhtml_branch_coverage=1 00:09:10.204 --rc genhtml_function_coverage=1 00:09:10.204 --rc genhtml_legend=1 00:09:10.204 --rc geninfo_all_blocks=1 00:09:10.204 --rc geninfo_unexecuted_blocks=1 00:09:10.204 00:09:10.204 ' 00:09:10.204 15:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:10.204 15:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70947 00:09:10.204 15:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70947 00:09:10.204 15:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70947 ']' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.204 15:35:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.204 [2024-12-06 15:35:58.790357] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
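The json_config_extra_key shutdown traced a little further up (json_config/common.sh) stops the target with SIGINT and then polls with 'kill -0', sleeping 0.5s between checks for at most 30 tries, before declaring 'SPDK target shutdown done'. A minimal sketch of that pattern follows; the pid and bounds are the ones from this log, and the kill -9 fallback is an assumption not shown in this trace.

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            # kill -0 only tests for process existence; it delivers no signal.
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null   # assumed hard-kill fallback
        return 1
    }

    shutdown_app 70862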
00:09:10.204 [2024-12-06 15:35:58.790551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:09:10.479 [2024-12-06 15:35:58.951870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.479 [2024-12-06 15:35:59.022308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.045 15:35:59 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.045 15:35:59 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:11.045 15:35:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:11.304 15:35:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70947 00:09:11.304 15:35:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70947 ']' 00:09:11.304 15:35:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70947 00:09:11.304 15:35:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:11.304 15:35:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.304 15:35:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70947 00:09:11.565 killing process with pid 70947 00:09:11.565 15:36:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.565 15:36:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.565 15:36:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70947' 00:09:11.565 15:36:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 70947 00:09:11.565 15:36:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 70947 00:09:12.134 ************************************ 00:09:12.134 END TEST alias_rpc 00:09:12.134 ************************************ 00:09:12.134 00:09:12.134 real 0m2.160s 00:09:12.134 user 0m2.226s 00:09:12.134 sys 0m0.634s 00:09:12.134 15:36:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.134 15:36:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.134 15:36:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:12.134 15:36:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:12.134 15:36:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.134 15:36:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.134 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:09:12.134 ************************************ 00:09:12.134 START TEST spdkcli_tcp 00:09:12.134 ************************************ 00:09:12.134 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:12.134 * Looking for test storage... 
00:09:12.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:12.134 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:12.134 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.134 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:12.393 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:12.393 15:36:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.394 15:36:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.394 15:36:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.394 15:36:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.394 --rc genhtml_branch_coverage=1 00:09:12.394 --rc genhtml_function_coverage=1 00:09:12.394 --rc genhtml_legend=1 00:09:12.394 --rc geninfo_all_blocks=1 00:09:12.394 --rc geninfo_unexecuted_blocks=1 00:09:12.394 00:09:12.394 ' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.394 --rc genhtml_branch_coverage=1 00:09:12.394 --rc genhtml_function_coverage=1 00:09:12.394 --rc genhtml_legend=1 00:09:12.394 --rc geninfo_all_blocks=1 00:09:12.394 --rc geninfo_unexecuted_blocks=1 00:09:12.394 
00:09:12.394 ' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.394 --rc genhtml_branch_coverage=1 00:09:12.394 --rc genhtml_function_coverage=1 00:09:12.394 --rc genhtml_legend=1 00:09:12.394 --rc geninfo_all_blocks=1 00:09:12.394 --rc geninfo_unexecuted_blocks=1 00:09:12.394 00:09:12.394 ' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.394 --rc genhtml_branch_coverage=1 00:09:12.394 --rc genhtml_function_coverage=1 00:09:12.394 --rc genhtml_legend=1 00:09:12.394 --rc geninfo_all_blocks=1 00:09:12.394 --rc geninfo_unexecuted_blocks=1 00:09:12.394 00:09:12.394 ' 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71038 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71038 00:09:12.394 15:36:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71038 ']' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.394 15:36:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.394 [2024-12-06 15:36:01.004725] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
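tcp.sh exercises the JSON-RPC layer over TCP rather than the UNIX socket directly: as the trace just below shows, it points a socat bridge at the target's socket and then drives rpc.py with its TCP options. A condensed version of that sequence, using the exact addresses, port, and flags from this log:

    # Forward 127.0.0.1:9998 to the UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r: retry count, -t: timeout in seconds; list every registered
    # RPC method by calling rpc_get_methods over TCP.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"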
00:09:12.394 [2024-12-06 15:36:01.005368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:09:12.653 [2024-12-06 15:36:01.164740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.653 [2024-12-06 15:36:01.225500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.653 [2024-12-06 15:36:01.225568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.589 15:36:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.589 15:36:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:13.589 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71055 00:09:13.589 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:13.589 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:13.589 [ 00:09:13.589 "bdev_malloc_delete", 00:09:13.589 "bdev_malloc_create", 00:09:13.589 "bdev_null_resize", 00:09:13.589 "bdev_null_delete", 00:09:13.589 "bdev_null_create", 00:09:13.589 "bdev_nvme_cuse_unregister", 00:09:13.589 "bdev_nvme_cuse_register", 00:09:13.589 "bdev_opal_new_user", 00:09:13.589 "bdev_opal_set_lock_state", 00:09:13.589 "bdev_opal_delete", 00:09:13.589 "bdev_opal_get_info", 00:09:13.589 "bdev_opal_create", 00:09:13.589 "bdev_nvme_opal_revert", 00:09:13.589 "bdev_nvme_opal_init", 00:09:13.589 "bdev_nvme_send_cmd", 00:09:13.589 "bdev_nvme_set_keys", 00:09:13.589 "bdev_nvme_get_path_iostat", 00:09:13.589 "bdev_nvme_get_mdns_discovery_info", 00:09:13.589 "bdev_nvme_stop_mdns_discovery", 00:09:13.589 "bdev_nvme_start_mdns_discovery", 00:09:13.589 "bdev_nvme_set_multipath_policy", 00:09:13.589 "bdev_nvme_set_preferred_path", 00:09:13.589 "bdev_nvme_get_io_paths", 00:09:13.589 "bdev_nvme_remove_error_injection", 00:09:13.589 "bdev_nvme_add_error_injection", 00:09:13.589 "bdev_nvme_get_discovery_info", 00:09:13.589 "bdev_nvme_stop_discovery", 00:09:13.589 "bdev_nvme_start_discovery", 00:09:13.589 "bdev_nvme_get_controller_health_info", 00:09:13.589 "bdev_nvme_disable_controller", 00:09:13.589 "bdev_nvme_enable_controller", 00:09:13.589 "bdev_nvme_reset_controller", 00:09:13.589 "bdev_nvme_get_transport_statistics", 00:09:13.589 "bdev_nvme_apply_firmware", 00:09:13.589 "bdev_nvme_detach_controller", 00:09:13.589 "bdev_nvme_get_controllers", 00:09:13.589 "bdev_nvme_attach_controller", 00:09:13.589 "bdev_nvme_set_hotplug", 00:09:13.589 "bdev_nvme_set_options", 00:09:13.589 "bdev_passthru_delete", 00:09:13.589 "bdev_passthru_create", 00:09:13.589 "bdev_lvol_set_parent_bdev", 00:09:13.589 "bdev_lvol_set_parent", 00:09:13.589 "bdev_lvol_check_shallow_copy", 00:09:13.589 "bdev_lvol_start_shallow_copy", 00:09:13.589 "bdev_lvol_grow_lvstore", 00:09:13.589 "bdev_lvol_get_lvols", 00:09:13.589 "bdev_lvol_get_lvstores", 00:09:13.589 "bdev_lvol_delete", 00:09:13.589 "bdev_lvol_set_read_only", 00:09:13.589 "bdev_lvol_resize", 00:09:13.589 "bdev_lvol_decouple_parent", 00:09:13.589 "bdev_lvol_inflate", 00:09:13.589 "bdev_lvol_rename", 00:09:13.589 "bdev_lvol_clone_bdev", 00:09:13.589 "bdev_lvol_clone", 00:09:13.589 "bdev_lvol_snapshot", 00:09:13.589 "bdev_lvol_create", 00:09:13.589 "bdev_lvol_delete_lvstore", 00:09:13.589 "bdev_lvol_rename_lvstore", 00:09:13.589 
"bdev_lvol_create_lvstore", 00:09:13.589 "bdev_raid_set_options", 00:09:13.589 "bdev_raid_remove_base_bdev", 00:09:13.589 "bdev_raid_add_base_bdev", 00:09:13.589 "bdev_raid_delete", 00:09:13.589 "bdev_raid_create", 00:09:13.589 "bdev_raid_get_bdevs", 00:09:13.589 "bdev_error_inject_error", 00:09:13.589 "bdev_error_delete", 00:09:13.589 "bdev_error_create", 00:09:13.589 "bdev_split_delete", 00:09:13.589 "bdev_split_create", 00:09:13.589 "bdev_delay_delete", 00:09:13.589 "bdev_delay_create", 00:09:13.589 "bdev_delay_update_latency", 00:09:13.589 "bdev_zone_block_delete", 00:09:13.589 "bdev_zone_block_create", 00:09:13.589 "blobfs_create", 00:09:13.589 "blobfs_detect", 00:09:13.590 "blobfs_set_cache_size", 00:09:13.590 "bdev_xnvme_delete", 00:09:13.590 "bdev_xnvme_create", 00:09:13.590 "bdev_aio_delete", 00:09:13.590 "bdev_aio_rescan", 00:09:13.590 "bdev_aio_create", 00:09:13.590 "bdev_ftl_set_property", 00:09:13.590 "bdev_ftl_get_properties", 00:09:13.590 "bdev_ftl_get_stats", 00:09:13.590 "bdev_ftl_unmap", 00:09:13.590 "bdev_ftl_unload", 00:09:13.590 "bdev_ftl_delete", 00:09:13.590 "bdev_ftl_load", 00:09:13.590 "bdev_ftl_create", 00:09:13.590 "bdev_virtio_attach_controller", 00:09:13.590 "bdev_virtio_scsi_get_devices", 00:09:13.590 "bdev_virtio_detach_controller", 00:09:13.590 "bdev_virtio_blk_set_hotplug", 00:09:13.590 "bdev_iscsi_delete", 00:09:13.590 "bdev_iscsi_create", 00:09:13.590 "bdev_iscsi_set_options", 00:09:13.590 "accel_error_inject_error", 00:09:13.590 "ioat_scan_accel_module", 00:09:13.590 "dsa_scan_accel_module", 00:09:13.590 "iaa_scan_accel_module", 00:09:13.590 "keyring_file_remove_key", 00:09:13.590 "keyring_file_add_key", 00:09:13.590 "keyring_linux_set_options", 00:09:13.590 "fsdev_aio_delete", 00:09:13.590 "fsdev_aio_create", 00:09:13.590 "iscsi_get_histogram", 00:09:13.590 "iscsi_enable_histogram", 00:09:13.590 "iscsi_set_options", 00:09:13.590 "iscsi_get_auth_groups", 00:09:13.590 "iscsi_auth_group_remove_secret", 00:09:13.590 "iscsi_auth_group_add_secret", 00:09:13.590 "iscsi_delete_auth_group", 00:09:13.590 "iscsi_create_auth_group", 00:09:13.590 "iscsi_set_discovery_auth", 00:09:13.590 "iscsi_get_options", 00:09:13.590 "iscsi_target_node_request_logout", 00:09:13.590 "iscsi_target_node_set_redirect", 00:09:13.590 "iscsi_target_node_set_auth", 00:09:13.590 "iscsi_target_node_add_lun", 00:09:13.590 "iscsi_get_stats", 00:09:13.590 "iscsi_get_connections", 00:09:13.590 "iscsi_portal_group_set_auth", 00:09:13.590 "iscsi_start_portal_group", 00:09:13.590 "iscsi_delete_portal_group", 00:09:13.590 "iscsi_create_portal_group", 00:09:13.590 "iscsi_get_portal_groups", 00:09:13.590 "iscsi_delete_target_node", 00:09:13.590 "iscsi_target_node_remove_pg_ig_maps", 00:09:13.590 "iscsi_target_node_add_pg_ig_maps", 00:09:13.590 "iscsi_create_target_node", 00:09:13.590 "iscsi_get_target_nodes", 00:09:13.590 "iscsi_delete_initiator_group", 00:09:13.590 "iscsi_initiator_group_remove_initiators", 00:09:13.590 "iscsi_initiator_group_add_initiators", 00:09:13.590 "iscsi_create_initiator_group", 00:09:13.590 "iscsi_get_initiator_groups", 00:09:13.590 "nvmf_set_crdt", 00:09:13.590 "nvmf_set_config", 00:09:13.590 "nvmf_set_max_subsystems", 00:09:13.590 "nvmf_stop_mdns_prr", 00:09:13.590 "nvmf_publish_mdns_prr", 00:09:13.590 "nvmf_subsystem_get_listeners", 00:09:13.590 "nvmf_subsystem_get_qpairs", 00:09:13.590 "nvmf_subsystem_get_controllers", 00:09:13.590 "nvmf_get_stats", 00:09:13.590 "nvmf_get_transports", 00:09:13.590 "nvmf_create_transport", 00:09:13.590 "nvmf_get_targets", 00:09:13.590 
"nvmf_delete_target", 00:09:13.590 "nvmf_create_target", 00:09:13.590 "nvmf_subsystem_allow_any_host", 00:09:13.590 "nvmf_subsystem_set_keys", 00:09:13.590 "nvmf_subsystem_remove_host", 00:09:13.590 "nvmf_subsystem_add_host", 00:09:13.590 "nvmf_ns_remove_host", 00:09:13.590 "nvmf_ns_add_host", 00:09:13.590 "nvmf_subsystem_remove_ns", 00:09:13.590 "nvmf_subsystem_set_ns_ana_group", 00:09:13.590 "nvmf_subsystem_add_ns", 00:09:13.590 "nvmf_subsystem_listener_set_ana_state", 00:09:13.590 "nvmf_discovery_get_referrals", 00:09:13.590 "nvmf_discovery_remove_referral", 00:09:13.590 "nvmf_discovery_add_referral", 00:09:13.590 "nvmf_subsystem_remove_listener", 00:09:13.590 "nvmf_subsystem_add_listener", 00:09:13.590 "nvmf_delete_subsystem", 00:09:13.590 "nvmf_create_subsystem", 00:09:13.590 "nvmf_get_subsystems", 00:09:13.590 "env_dpdk_get_mem_stats", 00:09:13.590 "nbd_get_disks", 00:09:13.590 "nbd_stop_disk", 00:09:13.590 "nbd_start_disk", 00:09:13.590 "ublk_recover_disk", 00:09:13.590 "ublk_get_disks", 00:09:13.590 "ublk_stop_disk", 00:09:13.590 "ublk_start_disk", 00:09:13.590 "ublk_destroy_target", 00:09:13.590 "ublk_create_target", 00:09:13.590 "virtio_blk_create_transport", 00:09:13.590 "virtio_blk_get_transports", 00:09:13.590 "vhost_controller_set_coalescing", 00:09:13.590 "vhost_get_controllers", 00:09:13.590 "vhost_delete_controller", 00:09:13.590 "vhost_create_blk_controller", 00:09:13.590 "vhost_scsi_controller_remove_target", 00:09:13.590 "vhost_scsi_controller_add_target", 00:09:13.590 "vhost_start_scsi_controller", 00:09:13.590 "vhost_create_scsi_controller", 00:09:13.590 "thread_set_cpumask", 00:09:13.590 "scheduler_set_options", 00:09:13.590 "framework_get_governor", 00:09:13.590 "framework_get_scheduler", 00:09:13.590 "framework_set_scheduler", 00:09:13.590 "framework_get_reactors", 00:09:13.590 "thread_get_io_channels", 00:09:13.590 "thread_get_pollers", 00:09:13.590 "thread_get_stats", 00:09:13.590 "framework_monitor_context_switch", 00:09:13.590 "spdk_kill_instance", 00:09:13.590 "log_enable_timestamps", 00:09:13.590 "log_get_flags", 00:09:13.590 "log_clear_flag", 00:09:13.590 "log_set_flag", 00:09:13.590 "log_get_level", 00:09:13.590 "log_set_level", 00:09:13.590 "log_get_print_level", 00:09:13.590 "log_set_print_level", 00:09:13.590 "framework_enable_cpumask_locks", 00:09:13.590 "framework_disable_cpumask_locks", 00:09:13.590 "framework_wait_init", 00:09:13.590 "framework_start_init", 00:09:13.590 "scsi_get_devices", 00:09:13.590 "bdev_get_histogram", 00:09:13.590 "bdev_enable_histogram", 00:09:13.590 "bdev_set_qos_limit", 00:09:13.590 "bdev_set_qd_sampling_period", 00:09:13.590 "bdev_get_bdevs", 00:09:13.590 "bdev_reset_iostat", 00:09:13.590 "bdev_get_iostat", 00:09:13.590 "bdev_examine", 00:09:13.590 "bdev_wait_for_examine", 00:09:13.590 "bdev_set_options", 00:09:13.590 "accel_get_stats", 00:09:13.590 "accel_set_options", 00:09:13.590 "accel_set_driver", 00:09:13.590 "accel_crypto_key_destroy", 00:09:13.590 "accel_crypto_keys_get", 00:09:13.590 "accel_crypto_key_create", 00:09:13.590 "accel_assign_opc", 00:09:13.590 "accel_get_module_info", 00:09:13.590 "accel_get_opc_assignments", 00:09:13.590 "vmd_rescan", 00:09:13.590 "vmd_remove_device", 00:09:13.590 "vmd_enable", 00:09:13.590 "sock_get_default_impl", 00:09:13.590 "sock_set_default_impl", 00:09:13.590 "sock_impl_set_options", 00:09:13.590 "sock_impl_get_options", 00:09:13.590 "iobuf_get_stats", 00:09:13.590 "iobuf_set_options", 00:09:13.590 "keyring_get_keys", 00:09:13.590 "framework_get_pci_devices", 00:09:13.590 
"framework_get_config", 00:09:13.590 "framework_get_subsystems", 00:09:13.590 "fsdev_set_opts", 00:09:13.590 "fsdev_get_opts", 00:09:13.590 "trace_get_info", 00:09:13.590 "trace_get_tpoint_group_mask", 00:09:13.590 "trace_disable_tpoint_group", 00:09:13.590 "trace_enable_tpoint_group", 00:09:13.590 "trace_clear_tpoint_mask", 00:09:13.590 "trace_set_tpoint_mask", 00:09:13.590 "notify_get_notifications", 00:09:13.590 "notify_get_types", 00:09:13.590 "spdk_get_version", 00:09:13.590 "rpc_get_methods" 00:09:13.590 ] 00:09:13.590 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:13.590 15:36:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.590 15:36:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.849 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:13.849 15:36:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71038 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71038 ']' 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71038 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71038 00:09:13.849 killing process with pid 71038 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71038' 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71038 00:09:13.849 15:36:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71038 00:09:14.418 ************************************ 00:09:14.418 END TEST spdkcli_tcp 00:09:14.418 ************************************ 00:09:14.418 00:09:14.418 real 0m2.273s 00:09:14.418 user 0m3.990s 00:09:14.418 sys 0m0.709s 00:09:14.418 15:36:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.418 15:36:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.418 15:36:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:14.418 15:36:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.418 15:36:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.418 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:09:14.418 ************************************ 00:09:14.418 START TEST dpdk_mem_utility 00:09:14.418 ************************************ 00:09:14.418 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:14.418 * Looking for test storage... 
00:09:14.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:14.418 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.418 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.418 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:14.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
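The same scripts/common.sh version check runs at the top of every suite in this log and is mid-trace here for dpdk_mem_utility: 'lt 1.15 2' decides whether the installed lcov predates 2.x and therefore which coverage flags to export. The sketch below condenses the traced logic (IFS=.-: split, component-wise numeric compare); it assumes purely numeric components, unlike the traced 'decimal' helper, which also guards non-numeric fields.

    # Split versions on '.', '-' or ':' and compare component-wise.
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        # All components equal.
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]
    }

    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"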
00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.677 15:36:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.677 --rc genhtml_branch_coverage=1 00:09:14.677 --rc genhtml_function_coverage=1 00:09:14.677 --rc genhtml_legend=1 00:09:14.677 --rc geninfo_all_blocks=1 00:09:14.677 --rc geninfo_unexecuted_blocks=1 00:09:14.677 00:09:14.677 ' 00:09:14.677 15:36:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:14.677 15:36:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71138 00:09:14.677 15:36:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71138 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71138 ']' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.677 15:36:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.677 15:36:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:14.677 [2024-12-06 15:36:03.333126] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
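Once this target is up, test_dpdk_mem_info.sh (traced below) asks it to dump DPDK memory statistics via the env_dpdk_get_mem_stats RPC (the method is listed in this log's rpc_get_methods output, and here it reports /tmp/spdk_mem_dump.txt), then post-processes the dump twice with scripts/dpdk_mem_info.py: once for the heap/mempool/memzone summary and once per-heap for the element list. The same two steps, condensed, with the paths taken from this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Writes the dump file and reports its name as JSON.
    "$RPC" env_dpdk_get_mem_stats

    "$MEM_SCRIPT"        # heaps, mempools, memzones summary
    "$MEM_SCRIPT" -m 0   # free/busy element detail for heap id 0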
00:09:14.677 [2024-12-06 15:36:03.333645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:09:14.936 [2024-12-06 15:36:03.500296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.936 [2024-12-06 15:36:03.573706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.872 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.872 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:15.872 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:15.872 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:15.872 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.872 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:15.872 { 00:09:15.872 "filename": "/tmp/spdk_mem_dump.txt" 00:09:15.872 } 00:09:15.872 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.872 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:15.872 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:15.872 1 heaps totaling size 818.000000 MiB 00:09:15.872 size: 818.000000 MiB heap id: 0 00:09:15.872 end heaps---------- 00:09:15.872 9 mempools totaling size 603.782043 MiB 00:09:15.872 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:15.872 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:15.872 size: 100.555481 MiB name: bdev_io_71138 00:09:15.872 size: 50.003479 MiB name: msgpool_71138 00:09:15.872 size: 36.509338 MiB name: fsdev_io_71138 00:09:15.872 size: 21.763794 MiB name: PDU_Pool 00:09:15.872 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:15.872 size: 4.133484 MiB name: evtpool_71138 00:09:15.872 size: 0.026123 MiB name: Session_Pool 00:09:15.872 end mempools------- 00:09:15.872 6 memzones totaling size 4.142822 MiB 00:09:15.872 size: 1.000366 MiB name: RG_ring_0_71138 00:09:15.872 size: 1.000366 MiB name: RG_ring_1_71138 00:09:15.872 size: 1.000366 MiB name: RG_ring_4_71138 00:09:15.872 size: 1.000366 MiB name: RG_ring_5_71138 00:09:15.872 size: 0.125366 MiB name: RG_ring_2_71138 00:09:15.872 size: 0.015991 MiB name: RG_ring_3_71138 00:09:15.872 end memzones------- 00:09:15.872 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:15.872 heap id: 0 total size: 818.000000 MiB number of busy elements: 312 number of free elements: 15 00:09:15.872 list of free elements. 
size: 10.803406 MiB 00:09:15.872 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:15.872 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:15.872 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:15.872 element at address: 0x200000400000 with size: 0.993958 MiB 00:09:15.872 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:15.872 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:15.872 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:15.872 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:15.872 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:09:15.872 element at address: 0x20000a600000 with size: 0.488892 MiB 00:09:15.872 element at address: 0x200000c00000 with size: 0.486267 MiB 00:09:15.872 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:15.872 element at address: 0x200003e00000 with size: 0.480286 MiB 00:09:15.872 element at address: 0x200028200000 with size: 0.395935 MiB 00:09:15.872 element at address: 0x200000800000 with size: 0.351746 MiB 00:09:15.872 list of standard malloc elements. size: 199.267700 MiB 00:09:15.872 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:15.872 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:15.872 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:15.872 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:15.872 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:15.872 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:15.872 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:15.872 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:15.872 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:15.872 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:09:15.872 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:15.872 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000085e580 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087e840 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087e900 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f080 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f140 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f200 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f380 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f440 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f500 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:09:15.873 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:09:15.873 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:09:15.873 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:15.874 element at 
address: 0x2000282655c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x200028265680 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c280 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c480 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c540 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c600 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c780 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c840 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c900 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d080 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d140 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d200 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d380 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d440 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d500 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d680 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d740 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d800 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826d980 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826da40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826db00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826de00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826df80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e040 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e100 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e280 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e340 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e400 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e580 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e640 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e700 
with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e880 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826e940 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f000 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f180 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f240 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f300 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f480 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f540 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f600 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f780 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f840 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f900 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:15.874 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:15.874 list of memzone associated elements. 
size: 607.928894 MiB 00:09:15.874 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:15.874 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:15.874 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:15.874 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:15.874 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:15.874 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71138_0 00:09:15.874 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:15.874 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71138_0 00:09:15.874 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:15.874 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71138_0 00:09:15.874 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:15.874 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:15.874 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:15.874 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:15.874 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:15.874 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71138_0 00:09:15.874 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:15.874 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71138 00:09:15.874 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:15.874 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71138 00:09:15.874 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:15.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:15.874 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:15.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:15.874 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:15.874 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:15.875 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:15.875 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:15.875 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:15.875 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71138 00:09:15.875 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:15.875 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71138 00:09:15.875 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:15.875 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71138 00:09:15.875 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:15.875 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71138 00:09:15.875 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:15.875 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71138 00:09:15.875 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:15.875 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71138 00:09:15.875 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:15.875 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:15.875 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:15.875 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:15.875 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:15.875 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:15.875 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:15.875 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71138 00:09:15.875 element at address: 0x20000085e640 with size: 0.125488 MiB 00:09:15.875 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71138 00:09:15.875 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:15.875 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:15.875 element at address: 0x200028265740 with size: 0.023743 MiB 00:09:15.875 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:15.875 element at address: 0x20000085a380 with size: 0.016113 MiB 00:09:15.875 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71138 00:09:15.875 element at address: 0x20002826b880 with size: 0.002441 MiB 00:09:15.875 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:15.875 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:09:15.875 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71138 00:09:15.875 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:15.875 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71138 00:09:15.875 element at address: 0x20000085a180 with size: 0.000305 MiB 00:09:15.875 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71138 00:09:15.875 element at address: 0x20002826c340 with size: 0.000305 MiB 00:09:15.875 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:15.875 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:15.875 15:36:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71138 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71138 ']' 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71138 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71138 00:09:15.875 killing process with pid 71138 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71138' 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71138 00:09:15.875 15:36:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71138 00:09:16.442 ************************************ 00:09:16.442 END TEST dpdk_mem_utility 00:09:16.442 ************************************ 00:09:16.442 00:09:16.442 real 0m2.041s 00:09:16.442 user 0m2.013s 00:09:16.442 sys 0m0.650s 00:09:16.442 15:36:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.442 15:36:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:16.442 15:36:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:16.442 15:36:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.442 15:36:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.442 15:36:05 -- common/autotest_common.sh@10 -- # set +x 
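The dpdk_mem_utility sequence above can be replayed by hand. A minimal sketch, assuming the same repo layout as this run and the default /var/tmp/spdk.sock socket (the sleep is a crude stand-in for the harness's waitforlisten):

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk               # assumed checkout path, as in this log
  "$SPDK_ROOT/build/bin/spdk_tgt" &                    # same target binary the test launched
  tgt=$!
  sleep 2                                              # stand-in for waitforlisten
  "$SPDK_ROOT/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt, as above
  "$SPDK_ROOT/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
  "$SPDK_ROOT/scripts/dpdk_mem_info.py" -m 0           # per-element view of heap id 0
  kill "$tgt"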
00:09:16.442 ************************************ 00:09:16.442 START TEST event 00:09:16.442 ************************************ 00:09:16.442 15:36:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:16.699 * Looking for test storage... 00:09:16.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.699 15:36:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.699 15:36:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.699 15:36:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.699 15:36:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.699 15:36:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.699 15:36:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.699 15:36:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.699 15:36:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.699 15:36:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.699 15:36:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.699 15:36:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.699 15:36:05 event -- scripts/common.sh@344 -- # case "$op" in 00:09:16.699 15:36:05 event -- scripts/common.sh@345 -- # : 1 00:09:16.699 15:36:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.699 15:36:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.699 15:36:05 event -- scripts/common.sh@365 -- # decimal 1 00:09:16.699 15:36:05 event -- scripts/common.sh@353 -- # local d=1 00:09:16.699 15:36:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.699 15:36:05 event -- scripts/common.sh@355 -- # echo 1 00:09:16.699 15:36:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.699 15:36:05 event -- scripts/common.sh@366 -- # decimal 2 00:09:16.699 15:36:05 event -- scripts/common.sh@353 -- # local d=2 00:09:16.699 15:36:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.699 15:36:05 event -- scripts/common.sh@355 -- # echo 2 00:09:16.699 15:36:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.699 15:36:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.699 15:36:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.699 15:36:05 event -- scripts/common.sh@368 -- # return 0 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.699 --rc genhtml_branch_coverage=1 00:09:16.699 --rc genhtml_function_coverage=1 00:09:16.699 --rc genhtml_legend=1 00:09:16.699 --rc geninfo_all_blocks=1 00:09:16.699 --rc geninfo_unexecuted_blocks=1 00:09:16.699 00:09:16.699 ' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.699 --rc genhtml_branch_coverage=1 00:09:16.699 --rc genhtml_function_coverage=1 00:09:16.699 --rc genhtml_legend=1 00:09:16.699 --rc 
geninfo_all_blocks=1 00:09:16.699 --rc geninfo_unexecuted_blocks=1 00:09:16.699 00:09:16.699 ' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.699 --rc genhtml_branch_coverage=1 00:09:16.699 --rc genhtml_function_coverage=1 00:09:16.699 --rc genhtml_legend=1 00:09:16.699 --rc geninfo_all_blocks=1 00:09:16.699 --rc geninfo_unexecuted_blocks=1 00:09:16.699 00:09:16.699 ' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.699 --rc genhtml_branch_coverage=1 00:09:16.699 --rc genhtml_function_coverage=1 00:09:16.699 --rc genhtml_legend=1 00:09:16.699 --rc geninfo_all_blocks=1 00:09:16.699 --rc geninfo_unexecuted_blocks=1 00:09:16.699 00:09:16.699 ' 00:09:16.699 15:36:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:16.699 15:36:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:16.699 15:36:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:16.699 15:36:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.699 15:36:05 event -- common/autotest_common.sh@10 -- # set +x 00:09:16.699 ************************************ 00:09:16.699 START TEST event_perf 00:09:16.699 ************************************ 00:09:16.699 15:36:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:16.699 Running I/O for 1 seconds...[2024-12-06 15:36:05.339081] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:16.699 [2024-12-06 15:36:05.339439] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71224 ] 00:09:16.957 [2024-12-06 15:36:05.499964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.957 [2024-12-06 15:36:05.573744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.957 [2024-12-06 15:36:05.574000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.957 [2024-12-06 15:36:05.573986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.957 Running I/O for 1 seconds...[2024-12-06 15:36:05.574072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.329 00:09:18.329 lcore 0: 132074 00:09:18.329 lcore 1: 132072 00:09:18.329 lcore 2: 132071 00:09:18.329 lcore 3: 132073 00:09:18.329 done. 
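For reference, the binary under test takes only a core mask and a duration; the harness invocation traced above is, in isolation (a sketch, flags copied from the trace):

  # -m 0xF: one reactor on each of cores 0-3; -t 1: generate events for one second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

Each "lcore N: <count>" line above is that core's processed-event total; the near-equal counts across the four lcores are the expected sign that events were spread evenly.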
00:09:18.329 00:09:18.329 real 0m1.358s 00:09:18.329 user 0m4.127s 00:09:18.329 sys 0m0.108s 00:09:18.329 15:36:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.329 ************************************ 00:09:18.329 END TEST event_perf 00:09:18.329 ************************************ 00:09:18.329 15:36:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:18.329 15:36:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:18.329 15:36:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.329 15:36:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.329 15:36:06 event -- common/autotest_common.sh@10 -- # set +x 00:09:18.329 ************************************ 00:09:18.329 START TEST event_reactor 00:09:18.329 ************************************ 00:09:18.329 15:36:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:18.329 [2024-12-06 15:36:06.748649] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:18.329 [2024-12-06 15:36:06.748832] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71258 ] 00:09:18.329 [2024-12-06 15:36:06.899292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.329 [2024-12-06 15:36:06.969278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.703 test_start 00:09:19.703 oneshot 00:09:19.703 tick 100 00:09:19.703 tick 100 00:09:19.703 tick 250 00:09:19.703 tick 100 00:09:19.703 tick 100 00:09:19.703 tick 250 00:09:19.703 tick 100 00:09:19.703 tick 500 00:09:19.703 tick 100 00:09:19.703 tick 100 00:09:19.703 tick 250 00:09:19.703 tick 100 00:09:19.703 tick 100 00:09:19.703 test_end 00:09:19.703 ************************************ 00:09:19.703 END TEST event_reactor 00:09:19.703 ************************************ 00:09:19.703 00:09:19.703 real 0m1.332s 00:09:19.703 user 0m1.139s 00:09:19.703 sys 0m0.083s 00:09:19.703 15:36:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.703 15:36:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:19.703 15:36:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:19.703 15:36:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.703 15:36:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.703 15:36:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:19.703 ************************************ 00:09:19.703 START TEST event_reactor_perf 00:09:19.703 ************************************ 00:09:19.703 15:36:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:19.703 [2024-12-06 15:36:08.137321] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:19.703 [2024-12-06 15:36:08.137749] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71302 ] 00:09:19.703 [2024-12-06 15:36:08.300211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.703 [2024-12-06 15:36:08.374279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.078 test_start 00:09:21.078 test_end 00:09:21.078 Performance: 274610 events per second 00:09:21.078 00:09:21.078 real 0m1.352s 00:09:21.078 user 0m1.157s 00:09:21.078 sys 0m0.086s 00:09:21.078 ************************************ 00:09:21.078 END TEST event_reactor_perf 00:09:21.078 ************************************ 00:09:21.078 15:36:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.078 15:36:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:21.078 15:36:09 event -- event/event.sh@49 -- # uname -s 00:09:21.078 15:36:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:21.078 15:36:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:21.078 15:36:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.078 15:36:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.078 15:36:09 event -- common/autotest_common.sh@10 -- # set +x 00:09:21.078 ************************************ 00:09:21.078 START TEST event_scheduler 00:09:21.078 ************************************ 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:21.078 * Looking for test storage... 
00:09:21.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:21.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.078 15:36:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.078 --rc genhtml_branch_coverage=1 00:09:21.078 --rc genhtml_function_coverage=1 00:09:21.078 --rc genhtml_legend=1 00:09:21.078 --rc geninfo_all_blocks=1 00:09:21.078 --rc geninfo_unexecuted_blocks=1 00:09:21.078 00:09:21.078 ' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.078 --rc genhtml_branch_coverage=1 00:09:21.078 --rc genhtml_function_coverage=1 00:09:21.078 --rc genhtml_legend=1 00:09:21.078 --rc geninfo_all_blocks=1 00:09:21.078 --rc geninfo_unexecuted_blocks=1 00:09:21.078 00:09:21.078 ' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.078 --rc genhtml_branch_coverage=1 00:09:21.078 --rc genhtml_function_coverage=1 00:09:21.078 --rc genhtml_legend=1 00:09:21.078 --rc geninfo_all_blocks=1 00:09:21.078 --rc geninfo_unexecuted_blocks=1 00:09:21.078 00:09:21.078 ' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.078 --rc genhtml_branch_coverage=1 00:09:21.078 --rc genhtml_function_coverage=1 00:09:21.078 --rc genhtml_legend=1 00:09:21.078 --rc geninfo_all_blocks=1 00:09:21.078 --rc geninfo_unexecuted_blocks=1 00:09:21.078 00:09:21.078 ' 00:09:21.078 15:36:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:21.078 15:36:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71367 00:09:21.078 15:36:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:21.078 15:36:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71367 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71367 ']' 00:09:21.078 15:36:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.078 15:36:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:21.337 [2024-12-06 15:36:09.814101] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:21.337 [2024-12-06 15:36:09.814834] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:09:21.337 [2024-12-06 15:36:09.976426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.596 [2024-12-06 15:36:10.052314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.596 [2024-12-06 15:36:10.052505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.596 [2024-12-06 15:36:10.052594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.596 [2024-12-06 15:36:10.052646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:22.164 15:36:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:22.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:22.164 POWER: Cannot set governor of lcore 0 to userspace 00:09:22.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:22.164 POWER: Cannot set governor of lcore 0 to performance 00:09:22.164 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:22.164 POWER: Cannot set governor of lcore 0 to userspace 00:09:22.164 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:22.164 POWER: Unable to set Power Management Environment for lcore 0 00:09:22.164 [2024-12-06 15:36:10.808438] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:22.164 [2024-12-06 15:36:10.808477] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:22.164 [2024-12-06 15:36:10.808511] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:22.164 [2024-12-06 15:36:10.808560] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:22.164 [2024-12-06 15:36:10.808576] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:22.164 [2024-12-06 15:36:10.808594] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.164 15:36:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.164 15:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 [2024-12-06 15:36:10.946886] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
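The POWER/GUEST_CHANNEL errors above are expected in a VM with no cpufreq or host power-agent access: the dynamic scheduler keeps running without the dpdk governor and still applies its load/core/busy settings (20/80/95). Roughly, the two RPC steps the script performs are (a sketch; rpc.py is SPDK's stock RPC client, and the explicit init is needed because the app was launched with --wait-for-rpc):

  scripts/rpc.py framework_set_scheduler dynamic   # governor init may fail harmlessly, as above
  scripts/rpc.py framework_start_init              # finish the startup deferred by --wait-for-rpc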
00:09:22.423 15:36:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:22.423 15:36:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.423 15:36:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 ************************************ 00:09:22.423 START TEST scheduler_create_thread 00:09:22.423 ************************************ 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 2 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 3 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 4 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 5 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 6 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 7 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 8 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 9 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 10 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.423 15:36:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:23.833 15:36:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.833 15:36:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:23.833 15:36:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:23.833 15:36:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.833 15:36:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:25.205 ************************************ 00:09:25.205 END TEST scheduler_create_thread 00:09:25.205 ************************************ 00:09:25.205 15:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.205 00:09:25.205 real 0m2.613s 00:09:25.205 user 0m0.020s 00:09:25.205 sys 0m0.005s 00:09:25.205 15:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.205 15:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:25.205 15:36:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:25.205 15:36:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71367 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71367 ']' 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71367 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71367 00:09:25.205 killing process with pid 71367 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71367' 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71367 00:09:25.205 15:36:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71367 00:09:25.462 [2024-12-06 15:36:14.053603] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
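
The scheduler_create_thread trace above drives the running app entirely over JSON-RPC: scripts/rpc.py loads the test's scheduler_plugin module, creates pinned busy and idle threads with a cpumask (-m) and an active percentage (-a), then retunes one unpinned thread by id and deletes another. A condensed sketch of that sequence, assuming the plugin module is on PYTHONPATH and the app listens on the default RPC socket:

    rpc="scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread on the same core
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # unpinned; the RPC returns the new thread id
    $rpc scheduler_thread_set_active "$tid" 50                    # raise its busy percentage to 50
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"                           # threads can be removed at runtime too
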
00:09:26.027 00:09:26.027 real 0m4.906s 00:09:26.027 user 0m8.988s 00:09:26.027 sys 0m0.552s 00:09:26.027 15:36:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.027 ************************************ 00:09:26.027 END TEST event_scheduler 00:09:26.027 ************************************ 00:09:26.027 15:36:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 15:36:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:26.027 15:36:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:26.027 15:36:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.027 15:36:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.027 15:36:14 event -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 ************************************ 00:09:26.027 START TEST app_repeat 00:09:26.027 ************************************ 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71473 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:26.027 Process app_repeat pid: 71473 00:09:26.027 spdk_app_start Round 0 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71473' 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:26.027 15:36:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71473 /var/tmp/spdk-nbd.sock 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71473 ']' 00:09:26.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.027 15:36:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 [2024-12-06 15:36:14.532056] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:26.027 [2024-12-06 15:36:14.532244] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71473 ] 00:09:26.027 [2024-12-06 15:36:14.683056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.284 [2024-12-06 15:36:14.731359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.284 [2024-12-06 15:36:14.731390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.285 15:36:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.285 15:36:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:26.285 15:36:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:26.543 Malloc0 00:09:26.543 15:36:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:26.817 Malloc1 00:09:27.074 15:36:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.074 15:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:27.331 /dev/nbd0 00:09:27.331 15:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:27.331 15:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:27.331 15:36:15 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.331 1+0 records in 00:09:27.331 1+0 records out 00:09:27.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261565 s, 15.7 MB/s 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:27.331 15:36:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:27.331 15:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.331 15:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.331 15:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:27.590 /dev/nbd1 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:27.590 1+0 records in 00:09:27.590 1+0 records out 00:09:27.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463931 s, 8.8 MB/s 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:27.590 15:36:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.590 15:36:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.590 
15:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.849 15:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:27.849 { 00:09:27.849 "nbd_device": "/dev/nbd0", 00:09:27.849 "bdev_name": "Malloc0" 00:09:27.849 }, 00:09:27.849 { 00:09:27.849 "nbd_device": "/dev/nbd1", 00:09:27.849 "bdev_name": "Malloc1" 00:09:27.849 } 00:09:27.849 ]' 00:09:27.849 15:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:27.849 { 00:09:27.849 "nbd_device": "/dev/nbd0", 00:09:27.849 "bdev_name": "Malloc0" 00:09:27.849 }, 00:09:27.849 { 00:09:27.849 "nbd_device": "/dev/nbd1", 00:09:27.849 "bdev_name": "Malloc1" 00:09:27.849 } 00:09:27.849 ]' 00:09:27.849 15:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:28.117 /dev/nbd1' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:28.117 /dev/nbd1' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:28.117 256+0 records in 00:09:28.117 256+0 records out 00:09:28.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109359 s, 95.9 MB/s 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:28.117 256+0 records in 00:09:28.117 256+0 records out 00:09:28.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309221 s, 33.9 MB/s 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:28.117 256+0 records in 00:09:28.117 256+0 records out 00:09:28.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341524 s, 30.7 MB/s 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:28.117 15:36:16 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.117 15:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.375 15:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.633 15:36:17 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.633 15:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:29.199 15:36:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:29.199 15:36:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:29.456 15:36:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:29.714 [2024-12-06 15:36:18.242439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.714 [2024-12-06 15:36:18.310661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.714 [2024-12-06 15:36:18.310667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.714 [2024-12-06 15:36:18.386100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:29.714 [2024-12-06 15:36:18.386214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:32.995 spdk_app_start Round 1 00:09:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:32.995 15:36:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:32.995 15:36:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:32.995 15:36:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71473 /var/tmp/spdk-nbd.sock 00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71473 ']' 00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
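
app_repeat was launched with -r /var/tmp/spdk-nbd.sock, and each round blocks in waitforlisten (traced around this point with max_retries=100) until the freshly restarted app answers on that socket. The helper in common/autotest_common.sh does more bookkeeping, but its core is a bounded poll, roughly:

    sock=/var/tmp/spdk-nbd.sock
    for ((i = 1; i <= 100; i++)); do                      # max_retries=100, as in the trace
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            break                                         # app is up and serving RPCs
        fi
        sleep 0.1
    done
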
00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.995 15:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:32.995 15:36:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.995 15:36:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:32.996 15:36:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.996 Malloc0 00:09:32.996 15:36:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:33.253 Malloc1 00:09:33.253 15:36:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.253 15:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:33.511 /dev/nbd0 00:09:33.511 15:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:33.511 15:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.511 1+0 records in 00:09:33.511 1+0 records out 
00:09:33.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276082 s, 14.8 MB/s 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:33.511 15:36:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:33.511 15:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.511 15:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.511 15:36:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:34.077 /dev/nbd1 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:34.077 1+0 records in 00:09:34.077 1+0 records out 00:09:34.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469684 s, 8.7 MB/s 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:34.077 15:36:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.077 15:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:34.412 { 00:09:34.412 "nbd_device": "/dev/nbd0", 00:09:34.412 "bdev_name": "Malloc0" 00:09:34.412 }, 00:09:34.412 { 00:09:34.412 "nbd_device": "/dev/nbd1", 00:09:34.412 "bdev_name": "Malloc1" 00:09:34.412 } 
00:09:34.412 ]' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:34.412 { 00:09:34.412 "nbd_device": "/dev/nbd0", 00:09:34.412 "bdev_name": "Malloc0" 00:09:34.412 }, 00:09:34.412 { 00:09:34.412 "nbd_device": "/dev/nbd1", 00:09:34.412 "bdev_name": "Malloc1" 00:09:34.412 } 00:09:34.412 ]' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:34.412 /dev/nbd1' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:34.412 /dev/nbd1' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:34.412 256+0 records in 00:09:34.412 256+0 records out 00:09:34.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010699 s, 98.0 MB/s 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:34.412 256+0 records in 00:09:34.412 256+0 records out 00:09:34.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028997 s, 36.2 MB/s 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.412 15:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:34.412 256+0 records in 00:09:34.412 256+0 records out 00:09:34.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0372365 s, 28.2 MB/s 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.412 15:36:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.986 15:36:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.245 15:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:35.505 15:36:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:35.505 15:36:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:36.074 15:36:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:36.074 [2024-12-06 15:36:24.750939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.333 [2024-12-06 15:36:24.814628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.333 [2024-12-06 15:36:24.814660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.333 [2024-12-06 15:36:24.899826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:36.333 [2024-12-06 15:36:24.899988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:38.862 spdk_app_start Round 2 00:09:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:38.862 15:36:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:38.862 15:36:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:38.862 15:36:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71473 /var/tmp/spdk-nbd.sock 00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71473 ']' 00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
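
Every round repeats the same data path: two 64 MB malloc bdevs (4096-byte blocks) are exposed as /dev/nbd0 and /dev/nbd1 via nbd_start_disk, 1 MiB of random data is pushed through each device with O_DIRECT writes, and cmp reads it back for a byte-exact check. Stripped of the helper plumbing, the pass looks like this sketch (temp path assumed):

    tmp=$(mktemp)                                             # stand-in for test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # raw write through the nbd device
        cmp -b -n 1M "$tmp" "$nbd"                            # read back, fail on the first differing byte
    done
    rm -f "$tmp"

The device-count assertions before and after teardown reuse one jq idiom from the trace:

    count=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ]                                        # drops back to 0 once both disks are stopped
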
00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.862 15:36:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:39.122 15:36:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.122 15:36:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:39.122 15:36:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.689 Malloc0 00:09:39.689 15:36:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.948 Malloc1 00:09:39.948 15:36:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.948 15:36:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:40.208 /dev/nbd0 00:09:40.208 15:36:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:40.208 15:36:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.208 1+0 records in 00:09:40.208 1+0 records out 
00:09:40.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339952 s, 12.0 MB/s 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.208 15:36:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:40.208 15:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.208 15:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.208 15:36:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:40.489 /dev/nbd1 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.489 1+0 records in 00:09:40.489 1+0 records out 00:09:40.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248412 s, 16.5 MB/s 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.489 15:36:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.489 15:36:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:40.752 { 00:09:40.752 "nbd_device": "/dev/nbd0", 00:09:40.752 "bdev_name": "Malloc0" 00:09:40.752 }, 00:09:40.752 { 00:09:40.752 "nbd_device": "/dev/nbd1", 00:09:40.752 "bdev_name": "Malloc1" 00:09:40.752 } 
00:09:40.752 ]' 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.752 { 00:09:40.752 "nbd_device": "/dev/nbd0", 00:09:40.752 "bdev_name": "Malloc0" 00:09:40.752 }, 00:09:40.752 { 00:09:40.752 "nbd_device": "/dev/nbd1", 00:09:40.752 "bdev_name": "Malloc1" 00:09:40.752 } 00:09:40.752 ]' 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:40.752 /dev/nbd1' 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:40.752 /dev/nbd1' 00:09:40.752 15:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:41.010 256+0 records in 00:09:41.010 256+0 records out 00:09:41.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00929149 s, 113 MB/s 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:41.010 256+0 records in 00:09:41.010 256+0 records out 00:09:41.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301048 s, 34.8 MB/s 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:41.010 256+0 records in 00:09:41.010 256+0 records out 00:09:41.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327411 s, 32.0 MB/s 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:41.010 15:36:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.010 15:36:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.268 15:36:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.834 15:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.095 15:36:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.095 15:36:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.353 15:36:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:42.610 [2024-12-06 15:36:31.167451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.610 [2024-12-06 15:36:31.228337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.610 [2024-12-06 15:36:31.228322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.610 [2024-12-06 15:36:31.293089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.610 [2024-12-06 15:36:31.293185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:45.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:45.892 15:36:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71473 /var/tmp/spdk-nbd.sock 00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71473 ']' 00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
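
Rounds 0 through 2 all end the same way: spdk_kill_instance SIGTERM over RPC, a 3-second sleep while app_repeat (started with -t 4) brings its app back up, then the next waitforlisten. The driving loop in event.sh reduces to roughly this shape:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"       # block until this round's app is up
        # ...create Malloc0/Malloc1, map /dev/nbd0 and /dev/nbd1, write+verify 1 MiB...
        scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3                                         # app_repeat starts the next iteration itself
    done
    waitforlisten "$repeat_pid" "$rpc_server"           # round 3 comes up one last time
    killprocess "$repeat_pid"                           # then the binary is shut down for good

killprocess itself, traced just below with pid 71473, boils down to a liveness check, a guard against killing a sudo wrapper, and a kill/wait pair; a sketch of the helper's moving parts:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it and propagate its exit status
    }
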
00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.892 15:36:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:45.892 15:36:34 event.app_repeat -- event/event.sh@39 -- # killprocess 71473 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71473 ']' 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71473 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71473 00:09:45.892 killing process with pid 71473 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71473' 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71473 00:09:45.892 15:36:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 71473 00:09:45.892 spdk_app_start is called in Round 0. 00:09:45.892 Shutdown signal received, stop current app iteration 00:09:45.892 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:09:45.892 spdk_app_start is called in Round 1. 00:09:45.892 Shutdown signal received, stop current app iteration 00:09:45.892 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:09:45.892 spdk_app_start is called in Round 2. 00:09:45.892 Shutdown signal received, stop current app iteration 00:09:45.892 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:09:45.892 spdk_app_start is called in Round 3. 00:09:45.892 Shutdown signal received, stop current app iteration 00:09:46.151 15:36:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:46.151 15:36:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:46.151 00:09:46.151 real 0m20.116s 00:09:46.151 user 0m45.936s 00:09:46.151 sys 0m3.268s 00:09:46.151 ************************************ 00:09:46.151 END TEST app_repeat 00:09:46.151 ************************************ 00:09:46.151 15:36:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.151 15:36:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.151 15:36:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:46.151 15:36:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:46.151 15:36:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.151 15:36:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.151 15:36:34 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.151 ************************************ 00:09:46.151 START TEST cpu_locks 00:09:46.151 ************************************ 00:09:46.151 15:36:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:46.151 * Looking for test storage... 
00:09:46.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:46.151 15:36:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.151 15:36:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.151 15:36:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.151 15:36:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.151 15:36:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.410 15:36:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.410 --rc genhtml_branch_coverage=1 00:09:46.410 --rc genhtml_function_coverage=1 00:09:46.410 --rc genhtml_legend=1 00:09:46.410 --rc geninfo_all_blocks=1 00:09:46.410 --rc geninfo_unexecuted_blocks=1 00:09:46.410 00:09:46.410 ' 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.410 --rc genhtml_branch_coverage=1 00:09:46.410 --rc genhtml_function_coverage=1 
00:09:46.410 --rc genhtml_legend=1 00:09:46.410 --rc geninfo_all_blocks=1 00:09:46.410 --rc geninfo_unexecuted_blocks=1 00:09:46.410 00:09:46.410 ' 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.410 --rc genhtml_branch_coverage=1 00:09:46.410 --rc genhtml_function_coverage=1 00:09:46.410 --rc genhtml_legend=1 00:09:46.410 --rc geninfo_all_blocks=1 00:09:46.410 --rc geninfo_unexecuted_blocks=1 00:09:46.410 00:09:46.410 ' 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.410 --rc genhtml_branch_coverage=1 00:09:46.410 --rc genhtml_function_coverage=1 00:09:46.410 --rc genhtml_legend=1 00:09:46.410 --rc geninfo_all_blocks=1 00:09:46.410 --rc geninfo_unexecuted_blocks=1 00:09:46.410 00:09:46.410 ' 00:09:46.410 15:36:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:46.410 15:36:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:46.410 15:36:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:46.410 15:36:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.410 15:36:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 ************************************ 00:09:46.410 START TEST default_locks 00:09:46.410 ************************************ 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71930 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71930 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71930 ']' 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.410 15:36:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-12-06 15:36:35.001532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:46.410 [2024-12-06 15:36:35.001754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71930 ] 00:09:46.668 [2024-12-06 15:36:35.168340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.668 [2024-12-06 15:36:35.219124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.598 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.598 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:47.598 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71930 00:09:47.598 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71930 00:09:47.598 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:47.858 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71930 00:09:47.858 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71930 ']' 00:09:47.858 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71930 00:09:47.858 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71930 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.859 killing process with pid 71930 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71930' 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71930 00:09:47.859 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71930 00:09:48.425 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71930 00:09:48.425 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:48.425 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71930 00:09:48.425 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71930 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71930 ']' 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.426 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 ERROR: process (pid: 71930) is no longer running 00:09:48.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71930) - No such process 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:48.426 00:09:48.426 real 0m2.063s 00:09:48.426 user 0m2.168s 00:09:48.426 sys 0m0.698s 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.426 15:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 ************************************ 00:09:48.426 END TEST default_locks 00:09:48.426 ************************************ 00:09:48.426 15:36:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:48.426 15:36:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.426 15:36:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.426 15:36:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 ************************************ 00:09:48.426 START TEST default_locks_via_rpc 00:09:48.426 ************************************ 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71984 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71984 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71984 ']' 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.426 15:36:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.426 [2024-12-06 15:36:37.108962] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:48.426 [2024-12-06 15:36:37.109210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71984 ] 00:09:48.684 [2024-12-06 15:36:37.272627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.684 [2024-12-06 15:36:37.320064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71984 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71984 00:09:49.619 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71984 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71984 ']' 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71984 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.878 15:36:38 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71984 00:09:50.138 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.138 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.138 killing process with pid 71984 00:09:50.138 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71984' 00:09:50.138 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71984 00:09:50.138 15:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71984 00:09:50.707 00:09:50.707 real 0m2.133s 00:09:50.707 user 0m2.208s 00:09:50.707 sys 0m0.726s 00:09:50.707 15:36:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.707 15:36:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.707 ************************************ 00:09:50.707 END TEST default_locks_via_rpc 00:09:50.707 ************************************ 00:09:50.707 15:36:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:50.707 15:36:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.707 15:36:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.707 15:36:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.707 ************************************ 00:09:50.707 START TEST non_locking_app_on_locked_coremask 00:09:50.707 ************************************ 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72036 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72036 /var/tmp/spdk.sock 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72036 ']' 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.707 15:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.707 [2024-12-06 15:36:39.302361] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:50.707 [2024-12-06 15:36:39.302620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72036 ] 00:09:50.966 [2024-12-06 15:36:39.462216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.966 [2024-12-06 15:36:39.517214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72052 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72052 /var/tmp/spdk2.sock 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72052 ']' 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.902 15:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.902 [2024-12-06 15:36:40.427686] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:51.902 [2024-12-06 15:36:40.427926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72052 ] 00:09:52.161 [2024-12-06 15:36:40.608379] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:52.161 [2024-12-06 15:36:40.608483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.161 [2024-12-06 15:36:40.718275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.748 15:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.748 15:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:52.748 15:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72036 00:09:52.748 15:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:52.748 15:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72036 ']' 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.683 killing process with pid 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72036' 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72036 00:09:53.683 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72036 00:09:54.251 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72052 00:09:54.251 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72052 ']' 00:09:54.251 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72052 00:09:54.251 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:54.251 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.510 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72052 00:09:54.510 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.510 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.510 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72052' 00:09:54.510 killing process with pid 72052 00:09:54.510 15:36:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72052 00:09:54.510 15:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72052 00:09:55.169 00:09:55.169 real 0m4.380s 00:09:55.169 user 0m4.680s 00:09:55.169 sys 0m1.352s 00:09:55.169 15:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.169 ************************************ 00:09:55.169 END TEST non_locking_app_on_locked_coremask 00:09:55.169 ************************************ 00:09:55.169 15:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.169 15:36:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:55.169 15:36:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.169 15:36:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.169 15:36:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.169 ************************************ 00:09:55.169 START TEST locking_app_on_unlocked_coremask 00:09:55.169 ************************************ 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72121 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72121 /var/tmp/spdk.sock 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72121 ']' 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.169 15:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.169 [2024-12-06 15:36:43.746851] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:55.169 [2024-12-06 15:36:43.747077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72121 ] 00:09:55.454 [2024-12-06 15:36:43.905455] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:55.454 [2024-12-06 15:36:43.905532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.454 [2024-12-06 15:36:43.938280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.029 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.029 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72137 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72137 /var/tmp/spdk2.sock 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72137 ']' 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.287 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.288 15:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.288 [2024-12-06 15:36:44.858523] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:56.288 [2024-12-06 15:36:44.858736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72137 ] 00:09:56.545 [2024-12-06 15:36:45.034488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.545 [2024-12-06 15:36:45.135282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.112 15:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.112 15:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:57.112 15:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72137 00:09:57.112 15:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.112 15:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72137 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72121 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72121 ']' 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72121 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72121 00:09:58.046 killing process with pid 72121 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72121' 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72121 00:09:58.046 15:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72121 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72137 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72137 ']' 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72137 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.980 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72137 00:09:59.238 killing process with pid 72137 00:09:59.238 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.238 15:36:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.238 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72137' 00:09:59.238 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72137 00:09:59.238 15:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72137 00:09:59.803 00:09:59.803 real 0m4.658s 00:09:59.803 user 0m5.038s 00:09:59.803 sys 0m1.400s 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.803 ************************************ 00:09:59.803 END TEST locking_app_on_unlocked_coremask 00:09:59.803 ************************************ 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.803 15:36:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:59.803 15:36:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.803 15:36:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.803 15:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.803 ************************************ 00:09:59.803 START TEST locking_app_on_locked_coremask 00:09:59.803 ************************************ 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72212 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72212 /var/tmp/spdk.sock 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72212 ']' 00:09:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.803 15:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.803 [2024-12-06 15:36:48.466849] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:09:59.803 [2024-12-06 15:36:48.467354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72212 ] 00:10:00.061 [2024-12-06 15:36:48.638085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.061 [2024-12-06 15:36:48.688918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72228 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72228 /var/tmp/spdk2.sock 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72228 /var/tmp/spdk2.sock 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72228 /var/tmp/spdk2.sock 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72228 ']' 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.995 15:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.995 [2024-12-06 15:36:49.557147] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:00.995 [2024-12-06 15:36:49.557353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72228 ] 00:10:01.252 [2024-12-06 15:36:49.735696] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72212 has claimed it. 00:10:01.252 [2024-12-06 15:36:49.735787] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:01.817 ERROR: process (pid: 72228) is no longer running 00:10:01.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72228) - No such process 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72212 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72212 00:10:01.817 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72212 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72212 ']' 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72212 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72212 00:10:02.093 killing process with pid 72212 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72212' 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72212 00:10:02.093 15:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72212 00:10:03.028 00:10:03.028 real 0m3.027s 00:10:03.028 user 0m3.372s 00:10:03.028 sys 0m0.923s 00:10:03.028 15:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.028 15:36:51 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:10:03.028 ************************************ 00:10:03.028 END TEST locking_app_on_locked_coremask 00:10:03.028 ************************************ 00:10:03.028 15:36:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:03.028 15:36:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.028 15:36:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.028 15:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.028 ************************************ 00:10:03.028 START TEST locking_overlapped_coremask 00:10:03.028 ************************************ 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72281 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72281 /var/tmp/spdk.sock 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72281 ']' 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.028 15:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:03.028 [2024-12-06 15:36:51.557844] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:03.028 [2024-12-06 15:36:51.558133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72281 ] 00:10:03.286 [2024-12-06 15:36:51.716582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.286 [2024-12-06 15:36:51.784265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.286 [2024-12-06 15:36:51.784412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.286 [2024-12-06 15:36:51.784490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.220 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.220 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:04.220 15:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:04.220 15:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72299 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72299 /var/tmp/spdk2.sock 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72299 /var/tmp/spdk2.sock 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72299 /var/tmp/spdk2.sock 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72299 ']' 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.221 15:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.221 [2024-12-06 15:36:52.669833] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:04.221 [2024-12-06 15:36:52.670012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72299 ] 00:10:04.221 [2024-12-06 15:36:52.862250] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72281 has claimed it. 00:10:04.221 [2024-12-06 15:36:52.862369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:04.788 ERROR: process (pid: 72299) is no longer running 00:10:04.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72299) - No such process 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72281 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72281 ']' 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72281 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72281 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72281' 00:10:04.789 killing process with pid 72281 00:10:04.789 15:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72281 00:10:04.789 15:36:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72281 00:10:05.736 ************************************ 00:10:05.736 END TEST locking_overlapped_coremask 00:10:05.736 ************************************ 00:10:05.736 00:10:05.736 real 0m2.686s 00:10:05.736 user 0m7.270s 00:10:05.736 sys 0m0.784s 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.736 15:36:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:05.736 15:36:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.736 15:36:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.736 15:36:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.736 ************************************ 00:10:05.736 START TEST locking_overlapped_coremask_via_rpc 00:10:05.736 ************************************ 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:05.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72352 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72352 /var/tmp/spdk.sock 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72352 ']' 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.736 15:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.736 [2024-12-06 15:36:54.306740] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:05.736 [2024-12-06 15:36:54.307313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72352 ] 00:10:05.994 [2024-12-06 15:36:54.480046] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:05.994 [2024-12-06 15:36:54.480289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.994 [2024-12-06 15:36:54.563626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.994 [2024-12-06 15:36:54.563737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.994 [2024-12-06 15:36:54.563811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72370 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72370 /var/tmp/spdk2.sock 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72370 ']' 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.935 15:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.935 [2024-12-06 15:36:55.500924] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:06.935 [2024-12-06 15:36:55.501478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72370 ] 00:10:07.193 [2024-12-06 15:36:55.688887] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
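The two targets' coremasks are chosen to collide on exactly one core, which is what the next step exploits: 0x7 covers cores 0-2 (pid 72352, reactors on cores 0/1/2 above) and 0x1c covers cores 2-4. The overlap can be confirmed with shell arithmetic:

    # 0x07 = 0b00111 -> cores 0,1,2 (first target)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf 'overlap: 0x%x (core 2)\n' $((0x07 & 0x1c))   # -> overlap: 0x4, i.e. core 2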
00:10:07.193 [2024-12-06 15:36:55.688991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.193 [2024-12-06 15:36:55.808007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.193 [2024-12-06 15:36:55.811058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.193 [2024-12-06 15:36:55.811715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.128 [2024-12-06 15:36:56.562204] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72352 has claimed it. 00:10:08.128 request: 00:10:08.128 { 00:10:08.128 "method": "framework_enable_cpumask_locks", 00:10:08.128 "req_id": 1 00:10:08.128 } 00:10:08.128 Got JSON-RPC error response 00:10:08.128 response: 00:10:08.128 { 00:10:08.128 "code": -32603, 00:10:08.128 "message": "Failed to claim CPU core: 2" 00:10:08.128 } 00:10:08.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
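The failure above is the expected outcome: the first target (pid 72352) enables its locks first, so when the same RPC is issued against the second target's socket, claim_cpu_cores cannot lock core 2 and the call returns JSON-RPC error -32603 (the spec's "internal error" code) carrying the "Failed to claim CPU core: 2" message. Replayed by hand, the two calls would look like this (a sketch; sockets as in the trace):

    # First target listens on the default /var/tmp/spdk.sock, second on spdk2.sock.
    scripts/rpc.py framework_enable_cpumask_locks                         # succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: -32603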
00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72352 /var/tmp/spdk.sock 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72352 ']' 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.128 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72370 /var/tmp/spdk2.sock 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72370 ']' 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
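waitforlisten, traced above with max_retries=100, simply polls the target's RPC socket until it answers. A simplified sketch of that loop, assuming the stock rpc.py client (the real helper in autotest_common.sh also bails out if the pid dies first):

    # Roughly what waitforlisten does for pid 72352 on /var/tmp/spdk.sock.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done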
00:10:08.386 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.387 15:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:08.645 ************************************ 00:10:08.645 END TEST locking_overlapped_coremask_via_rpc 00:10:08.645 ************************************ 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:08.645 00:10:08.645 real 0m3.103s 00:10:08.645 user 0m1.795s 00:10:08.645 sys 0m0.219s 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.645 15:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.645 15:36:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:08.645 15:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72352 ]] 00:10:08.645 15:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72352 00:10:08.645 15:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72352 ']' 00:10:08.645 15:36:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72352 00:10:08.645 15:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:08.645 15:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.645 15:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72352 00:10:08.903 killing process with pid 72352 00:10:08.903 15:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.903 15:36:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.903 15:36:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72352' 00:10:08.903 15:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72352 00:10:08.903 15:36:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72352 00:10:09.471 15:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72370 ]] 00:10:09.471 15:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72370 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72370 ']' 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72370 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.471 
15:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72370 00:10:09.471 killing process with pid 72370 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72370' 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72370 00:10:09.471 15:36:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72370 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72352 ]] 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72352 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72352 ']' 00:10:10.037 Process with pid 72352 is not found 00:10:10.037 Process with pid 72370 is not found 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72352 00:10:10.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72352) - No such process 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72352 is not found' 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72370 ]] 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72370 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72370 ']' 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72370 00:10:10.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72370) - No such process 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72370 is not found' 00:10:10.037 15:36:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:10.037 ************************************ 00:10:10.037 END TEST cpu_locks 00:10:10.037 ************************************ 00:10:10.037 00:10:10.037 real 0m23.969s 00:10:10.037 user 0m42.241s 00:10:10.037 sys 0m7.446s 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.037 15:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.037 00:10:10.037 real 0m53.549s 00:10:10.037 user 1m43.786s 00:10:10.037 sys 0m11.838s 00:10:10.038 15:36:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.038 15:36:58 event -- common/autotest_common.sh@10 -- # set +x 00:10:10.038 ************************************ 00:10:10.038 END TEST event 00:10:10.038 ************************************ 00:10:10.038 15:36:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:10.038 15:36:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.038 15:36:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.038 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:10:10.038 ************************************ 00:10:10.038 START TEST thread 00:10:10.038 ************************************ 00:10:10.038 15:36:58 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:10.296 * Looking for test storage... 
00:10:10.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:10.296 15:36:58 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.296 15:36:58 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.296 15:36:58 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.296 15:36:58 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.296 15:36:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.296 15:36:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.296 15:36:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.296 15:36:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.296 15:36:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.296 15:36:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.296 15:36:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.296 15:36:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.296 15:36:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.296 15:36:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.296 15:36:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.296 15:36:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:10.296 15:36:58 thread -- scripts/common.sh@345 -- # : 1 00:10:10.296 15:36:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.296 15:36:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.296 15:36:58 thread -- scripts/common.sh@365 -- # decimal 1 00:10:10.296 15:36:58 thread -- scripts/common.sh@353 -- # local d=1 00:10:10.296 15:36:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.296 15:36:58 thread -- scripts/common.sh@355 -- # echo 1 00:10:10.297 15:36:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.297 15:36:58 thread -- scripts/common.sh@366 -- # decimal 2 00:10:10.297 15:36:58 thread -- scripts/common.sh@353 -- # local d=2 00:10:10.297 15:36:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.297 15:36:58 thread -- scripts/common.sh@355 -- # echo 2 00:10:10.297 15:36:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.297 15:36:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.297 15:36:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.297 15:36:58 thread -- scripts/common.sh@368 -- # return 0 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.297 --rc genhtml_branch_coverage=1 00:10:10.297 --rc genhtml_function_coverage=1 00:10:10.297 --rc genhtml_legend=1 00:10:10.297 --rc geninfo_all_blocks=1 00:10:10.297 --rc geninfo_unexecuted_blocks=1 00:10:10.297 00:10:10.297 ' 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.297 --rc genhtml_branch_coverage=1 00:10:10.297 --rc genhtml_function_coverage=1 00:10:10.297 --rc genhtml_legend=1 00:10:10.297 --rc geninfo_all_blocks=1 00:10:10.297 --rc geninfo_unexecuted_blocks=1 00:10:10.297 00:10:10.297 ' 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:10.297 --rc genhtml_branch_coverage=1 00:10:10.297 --rc genhtml_function_coverage=1 00:10:10.297 --rc genhtml_legend=1 00:10:10.297 --rc geninfo_all_blocks=1 00:10:10.297 --rc geninfo_unexecuted_blocks=1 00:10:10.297 00:10:10.297 ' 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.297 --rc genhtml_branch_coverage=1 00:10:10.297 --rc genhtml_function_coverage=1 00:10:10.297 --rc genhtml_legend=1 00:10:10.297 --rc geninfo_all_blocks=1 00:10:10.297 --rc geninfo_unexecuted_blocks=1 00:10:10.297 00:10:10.297 ' 00:10:10.297 15:36:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.297 15:36:58 thread -- common/autotest_common.sh@10 -- # set +x 00:10:10.297 ************************************ 00:10:10.297 START TEST thread_poller_perf 00:10:10.297 ************************************ 00:10:10.297 15:36:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:10.297 [2024-12-06 15:36:58.958427] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:10.297 [2024-12-06 15:36:58.958758] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72513 ] 00:10:10.556 [2024-12-06 15:36:59.110581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.556 [2024-12-06 15:36:59.183153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.556 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:11.934 [2024-12-06T15:37:00.627Z] ====================================== 00:10:11.934 [2024-12-06T15:37:00.627Z] busy:2214234126 (cyc) 00:10:11.934 [2024-12-06T15:37:00.627Z] total_run_count: 318000 00:10:11.934 [2024-12-06T15:37:00.627Z] tsc_hz: 2200000000 (cyc) 00:10:11.934 [2024-12-06T15:37:00.627Z] ====================================== 00:10:11.934 [2024-12-06T15:37:00.627Z] poller_cost: 6963 (cyc), 3165 (nsec) 00:10:11.934 00:10:11.934 real 0m1.348s 00:10:11.934 user 0m1.150s 00:10:11.934 sys 0m0.090s 00:10:11.934 15:37:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.934 ************************************ 00:10:11.934 END TEST thread_poller_perf 00:10:11.934 ************************************ 00:10:11.934 15:37:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:11.934 15:37:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:11.934 15:37:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:11.934 15:37:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.934 15:37:00 thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.934 ************************************ 00:10:11.934 START TEST thread_poller_perf 00:10:11.934 ************************************ 00:10:11.934 15:37:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:11.934 [2024-12-06 15:37:00.377918] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:11.934 [2024-12-06 15:37:00.378175] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72550 ] 00:10:11.934 [2024-12-06 15:37:00.540215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.934 [2024-12-06 15:37:00.609031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.934 Running 1000 pollers for 1 seconds with 0 microseconds period. 
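The summary block above is plain arithmetic over the raw counters: the per-poll cost in cycles is busy divided by total_run_count, and the nanosecond figure converts cycles at the reported tsc_hz (2.2 GHz here). For the 1-microsecond-period run just printed:

    # Deriving poller_cost from the counters reported above.
    busy=2214234126 runs=318000 tsc_hz=2200000000
    echo $(( busy / runs ))                          # 6963 cycles per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3165 nsec per poll

The 0-period run that follows obeys the same relation (648 cycles, about 294 ns), the lower per-call cost presumably reflecting pollers that fire on every reactor iteration rather than on a 1 µs timer.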
00:10:13.312 [2024-12-06T15:37:02.005Z] ====================================== 00:10:13.312 [2024-12-06T15:37:02.005Z] busy:2205232558 (cyc) 00:10:13.312 [2024-12-06T15:37:02.005Z] total_run_count: 3402000 00:10:13.312 [2024-12-06T15:37:02.005Z] tsc_hz: 2200000000 (cyc) 00:10:13.312 [2024-12-06T15:37:02.005Z] ====================================== 00:10:13.312 [2024-12-06T15:37:02.005Z] poller_cost: 648 (cyc), 294 (nsec) 00:10:13.312 00:10:13.312 real 0m1.366s 00:10:13.312 user 0m1.159s 00:10:13.312 sys 0m0.098s 00:10:13.312 15:37:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.312 15:37:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.312 ************************************ 00:10:13.312 END TEST thread_poller_perf 00:10:13.312 ************************************ 00:10:13.312 15:37:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:13.312 00:10:13.312 real 0m3.038s 00:10:13.312 user 0m2.459s 00:10:13.312 sys 0m0.351s 00:10:13.312 ************************************ 00:10:13.312 END TEST thread 00:10:13.312 ************************************ 00:10:13.312 15:37:01 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.312 15:37:01 thread -- common/autotest_common.sh@10 -- # set +x 00:10:13.312 15:37:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:13.312 15:37:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:13.312 15:37:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.312 15:37:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.312 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:10:13.312 ************************************ 00:10:13.312 START TEST app_cmdline 00:10:13.312 ************************************ 00:10:13.312 15:37:01 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:13.312 * Looking for test storage... 
00:10:13.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:13.312 15:37:01 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.312 15:37:01 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.312 15:37:01 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.312 15:37:01 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.312 15:37:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:13.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.572 15:37:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.572 --rc genhtml_branch_coverage=1 00:10:13.572 --rc genhtml_function_coverage=1 00:10:13.572 --rc genhtml_legend=1 00:10:13.572 --rc geninfo_all_blocks=1 00:10:13.572 --rc geninfo_unexecuted_blocks=1 00:10:13.572 00:10:13.572 ' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.572 --rc genhtml_branch_coverage=1 00:10:13.572 --rc genhtml_function_coverage=1 00:10:13.572 --rc genhtml_legend=1 00:10:13.572 --rc geninfo_all_blocks=1 00:10:13.572 --rc geninfo_unexecuted_blocks=1 00:10:13.572 00:10:13.572 ' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.572 --rc genhtml_branch_coverage=1 00:10:13.572 --rc genhtml_function_coverage=1 00:10:13.572 --rc genhtml_legend=1 00:10:13.572 --rc geninfo_all_blocks=1 00:10:13.572 --rc geninfo_unexecuted_blocks=1 00:10:13.572 00:10:13.572 ' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.572 --rc genhtml_branch_coverage=1 00:10:13.572 --rc genhtml_function_coverage=1 00:10:13.572 --rc genhtml_legend=1 00:10:13.572 --rc geninfo_all_blocks=1 00:10:13.572 --rc geninfo_unexecuted_blocks=1 00:10:13.572 00:10:13.572 ' 00:10:13.572 15:37:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:13.572 15:37:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72628 00:10:13.572 15:37:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72628 00:10:13.572 15:37:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 72628 ']' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.572 15:37:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:13.572 [2024-12-06 15:37:02.138388] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:13.572 [2024-12-06 15:37:02.138853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72628 ] 00:10:13.831 [2024-12-06 15:37:02.303454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.831 [2024-12-06 15:37:02.379917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.766 15:37:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.767 15:37:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:14.767 15:37:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:15.025 { 00:10:15.025 "version": "SPDK v25.01-pre git sha1 a5e6ecf28", 00:10:15.025 "fields": { 00:10:15.025 "major": 25, 00:10:15.025 "minor": 1, 00:10:15.025 "patch": 0, 00:10:15.025 "suffix": "-pre", 00:10:15.025 "commit": "a5e6ecf28" 00:10:15.025 } 00:10:15.025 } 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:15.025 15:37:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:15.025 15:37:03 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:15.283 request: 00:10:15.283 { 00:10:15.283 "method": "env_dpdk_get_mem_stats", 00:10:15.283 "req_id": 1 00:10:15.283 } 00:10:15.283 Got JSON-RPC error response 00:10:15.283 response: 00:10:15.283 { 00:10:15.283 "code": -32601, 00:10:15.283 "message": "Method not found" 00:10:15.283 } 00:10:15.283 15:37:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:15.283 15:37:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.283 15:37:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.283 15:37:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.283 15:37:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72628 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 72628 ']' 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 72628 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72628 00:10:15.284 killing process with pid 72628 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72628' 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 72628 00:10:15.284 15:37:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 72628 00:10:15.850 ************************************ 00:10:15.850 END TEST app_cmdline 00:10:15.850 ************************************ 00:10:15.850 00:10:15.850 real 0m2.637s 00:10:15.850 user 0m3.252s 00:10:15.850 sys 0m0.670s 00:10:15.850 15:37:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.850 15:37:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:15.850 15:37:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:15.850 15:37:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.850 15:37:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.850 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:10:15.850 ************************************ 00:10:15.850 START TEST version 00:10:15.850 ************************************ 00:10:15.850 15:37:04 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:16.109 * Looking for test storage... 
00:10:16.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.109 15:37:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.109 15:37:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.109 15:37:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.109 15:37:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.109 15:37:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.109 15:37:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.109 15:37:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.109 15:37:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.109 15:37:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.109 15:37:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.109 15:37:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.109 15:37:04 version -- scripts/common.sh@344 -- # case "$op" in 00:10:16.109 15:37:04 version -- scripts/common.sh@345 -- # : 1 00:10:16.109 15:37:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.109 15:37:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.109 15:37:04 version -- scripts/common.sh@365 -- # decimal 1 00:10:16.109 15:37:04 version -- scripts/common.sh@353 -- # local d=1 00:10:16.109 15:37:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.109 15:37:04 version -- scripts/common.sh@355 -- # echo 1 00:10:16.109 15:37:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.109 15:37:04 version -- scripts/common.sh@366 -- # decimal 2 00:10:16.109 15:37:04 version -- scripts/common.sh@353 -- # local d=2 00:10:16.109 15:37:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.109 15:37:04 version -- scripts/common.sh@355 -- # echo 2 00:10:16.109 15:37:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.109 15:37:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.109 15:37:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.109 15:37:04 version -- scripts/common.sh@368 -- # return 0 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.109 --rc genhtml_branch_coverage=1 00:10:16.109 --rc genhtml_function_coverage=1 00:10:16.109 --rc genhtml_legend=1 00:10:16.109 --rc geninfo_all_blocks=1 00:10:16.109 --rc geninfo_unexecuted_blocks=1 00:10:16.109 00:10:16.109 ' 00:10:16.109 15:37:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.110 --rc geninfo_unexecuted_blocks=1 00:10:16.110 00:10:16.110 ' 00:10:16.110 15:37:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.110 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.110 --rc geninfo_unexecuted_blocks=1 00:10:16.110 00:10:16.110 ' 00:10:16.110 15:37:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.110 --rc geninfo_unexecuted_blocks=1 00:10:16.110 00:10:16.110 ' 00:10:16.110 15:37:04 version -- app/version.sh@17 -- # get_header_version major 00:10:16.110 15:37:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # cut -f2 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.110 15:37:04 version -- app/version.sh@17 -- # major=25 00:10:16.110 15:37:04 version -- app/version.sh@18 -- # get_header_version minor 00:10:16.110 15:37:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # cut -f2 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.110 15:37:04 version -- app/version.sh@18 -- # minor=1 00:10:16.110 15:37:04 version -- app/version.sh@19 -- # get_header_version patch 00:10:16.110 15:37:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # cut -f2 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.110 15:37:04 version -- app/version.sh@19 -- # patch=0 00:10:16.110 15:37:04 version -- app/version.sh@20 -- # get_header_version suffix 00:10:16.110 15:37:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # cut -f2 00:10:16.110 15:37:04 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.110 15:37:04 version -- app/version.sh@20 -- # suffix=-pre 00:10:16.110 15:37:04 version -- app/version.sh@22 -- # version=25.1 00:10:16.110 15:37:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:16.110 15:37:04 version -- app/version.sh@28 -- # version=25.1rc0 00:10:16.110 15:37:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:16.110 15:37:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:16.110 15:37:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:16.110 15:37:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:16.110 ************************************ 00:10:16.110 END TEST version 00:10:16.110 ************************************ 00:10:16.110 00:10:16.110 real 0m0.252s 00:10:16.110 user 0m0.164s 00:10:16.110 sys 0m0.129s 00:10:16.110 15:37:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.110 15:37:04 version -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 15:37:04 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:16.412 15:37:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:16.412 15:37:04 -- spdk/autotest.sh@194 -- # uname -s 00:10:16.412 15:37:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:16.412 15:37:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:16.412 15:37:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:16.412 15:37:04 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:16.412 15:37:04 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:16.412 15:37:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.412 15:37:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.412 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 ************************************ 00:10:16.412 START TEST blockdev_nvme 00:10:16.412 ************************************ 00:10:16.412 15:37:04 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:16.412 * Looking for test storage... 00:10:16.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:16.412 15:37:04 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.412 15:37:04 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.412 15:37:04 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.412 15:37:05 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.412 --rc genhtml_branch_coverage=1 00:10:16.412 --rc genhtml_function_coverage=1 00:10:16.412 --rc genhtml_legend=1 00:10:16.412 --rc geninfo_all_blocks=1 00:10:16.412 --rc geninfo_unexecuted_blocks=1 00:10:16.412 00:10:16.412 ' 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.412 --rc genhtml_branch_coverage=1 00:10:16.412 --rc genhtml_function_coverage=1 00:10:16.412 --rc genhtml_legend=1 00:10:16.412 --rc geninfo_all_blocks=1 00:10:16.412 --rc geninfo_unexecuted_blocks=1 00:10:16.412 00:10:16.412 ' 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.412 --rc genhtml_branch_coverage=1 00:10:16.412 --rc genhtml_function_coverage=1 00:10:16.412 --rc genhtml_legend=1 00:10:16.412 --rc geninfo_all_blocks=1 00:10:16.412 --rc geninfo_unexecuted_blocks=1 00:10:16.412 00:10:16.412 ' 00:10:16.412 15:37:05 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.412 --rc genhtml_branch_coverage=1 00:10:16.412 --rc genhtml_function_coverage=1 00:10:16.412 --rc genhtml_legend=1 00:10:16.412 --rc geninfo_all_blocks=1 00:10:16.412 --rc geninfo_unexecuted_blocks=1 00:10:16.412 00:10:16.412 ' 00:10:16.412 15:37:05 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:16.412 15:37:05 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:16.412 15:37:05 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:16.412 15:37:05 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.412 15:37:05 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72800 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:16.413 15:37:05 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72800 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 72800 ']' 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.413 15:37:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.670 [2024-12-06 15:37:05.190573] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
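setup_nvme_conf, which runs next, feeds the freshly started target a config generated by scripts/gen_nvme.sh: one bdev_nvme_attach_controller call per QEMU-emulated PCIe controller (Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0, shown in full in the trace that follows). Trimmed to a single controller, the generated JSON has this shape:

    # One entry of the gen_nvme.sh output that load_subsystem_config ingests
    # below; the full four-controller version appears verbatim in the trace.
    cat <<'EOF'
    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
    ] }
    EOF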
00:10:16.670 [2024-12-06 15:37:05.191112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72800 ] 00:10:16.670 [2024-12-06 15:37:05.355532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.929 [2024-12-06 15:37:05.415609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.186 15:37:05 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.186 15:37:05 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:17.186 15:37:05 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:17.186 15:37:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.186 15:37:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.442 15:37:06 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.442 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.442 15:37:06 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:17.700 15:37:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.700 15:37:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:17.700 15:37:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:17.701 15:37:06 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "89c18e7c-1206-4269-9070-27b9a63e549a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "89c18e7c-1206-4269-9070-27b9a63e549a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "dbe1c632-21bc-42e3-bb80-09f180198dd7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dbe1c632-21bc-42e3-bb80-09f180198dd7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f3acf338-da66-4aac-bc2a-379645faa9eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3acf338-da66-4aac-bc2a-379645faa9eb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "13fc2cc6-6546-4366-8b71-31bddce3ae67"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13fc2cc6-6546-4366-8b71-31bddce3ae67",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fbf650c5-d195-44fe-b204-af7fa14589cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fbf650c5-d195-44fe-b204-af7fa14589cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "134c813e-5bfd-47b2-b308-cfb1a3f3f650"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "134c813e-5bfd-47b2-b308-cfb1a3f3f650",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:17.701 15:37:06 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:17.701 15:37:06 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:17.701 15:37:06 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:17.701 15:37:06 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 72800 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 72800 ']' 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 72800 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:17.701 15:37:06 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72800 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.701 killing process with pid 72800 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72800' 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 72800 00:10:17.701 15:37:06 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 72800 00:10:18.267 15:37:06 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:18.267 15:37:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:18.267 15:37:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:18.267 15:37:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.267 15:37:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.267 ************************************ 00:10:18.267 START TEST bdev_hello_world 00:10:18.267 ************************************ 00:10:18.267 15:37:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:18.267 [2024-12-06 15:37:06.923940] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:18.267 [2024-12-06 15:37:06.924220] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72871 ] 00:10:18.525 [2024-12-06 15:37:07.083492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.525 [2024-12-06 15:37:07.162184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.091 [2024-12-06 15:37:07.621421] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:19.091 [2024-12-06 15:37:07.621507] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:19.091 [2024-12-06 15:37:07.621539] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:19.091 [2024-12-06 15:37:07.624422] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:19.091 [2024-12-06 15:37:07.625045] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:19.091 [2024-12-06 15:37:07.625097] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:19.091 [2024-12-06 15:37:07.625313] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
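The hello_bdev notices above come from SPDK's example application: it opens the named bdev, acquires an I/O channel, writes "Hello World!", and reads the string back. The invocation is the one shown in the trace; run from the repository root it is simply:

    # Drive one write/read round trip against the Nvme0n1 bdev
    # described by the bdev.json layout file.
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b Nvme0n1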
00:10:19.091 00:10:19.091 [2024-12-06 15:37:07.625362] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:19.349 00:10:19.349 real 0m1.097s 00:10:19.349 user 0m0.696s 00:10:19.349 sys 0m0.292s 00:10:19.349 15:37:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.349 15:37:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:19.349 ************************************ 00:10:19.349 END TEST bdev_hello_world 00:10:19.349 ************************************ 00:10:19.349 15:37:07 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:19.350 15:37:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.350 15:37:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.350 15:37:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.350 ************************************ 00:10:19.350 START TEST bdev_bounds 00:10:19.350 ************************************ 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72902 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:19.350 Process bdevio pid: 72902 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72902' 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72902 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72902 ']' 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.350 15:37:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:19.607 [2024-12-06 15:37:08.103765] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
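The bdev_bounds test that starts here has two moving parts visible in the trace: the bdevio application is launched in wait mode, and a Python driver then triggers its CUnit suites over RPC, which is why the EAL startup below is followed by per-bdev test listings. A condensed sketch of those two steps (the trace's additional -s 0 flag and trailing arguments are omitted here):

    # Start bdevio in wait mode (-w): it registers the bdevs from the
    # JSON layout file and idles until told to run.
    ./test/bdev/bdevio/bdevio -w --json test/bdev/bdev.json &
    # Kick off the CUnit suites over the default RPC socket.
    ./test/bdev/bdevio/tests.py perform_tests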
00:10:19.607 [2024-12-06 15:37:08.104821] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72902 ] 00:10:19.607 [2024-12-06 15:37:08.278276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.865 [2024-12-06 15:37:08.325826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.865 [2024-12-06 15:37:08.325921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.865 [2024-12-06 15:37:08.326014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.801 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.801 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:20.801 15:37:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:20.801 I/O targets: 00:10:20.801 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:20.801 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:20.801 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.801 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.801 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.801 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:20.801 00:10:20.802 00:10:20.802 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.802 http://cunit.sourceforge.net/ 00:10:20.802 00:10:20.802 00:10:20.802 Suite: bdevio tests on: Nvme3n1 00:10:20.802 Test: blockdev write read block ...passed 00:10:20.802 Test: blockdev write zeroes read block ...passed 00:10:20.802 Test: blockdev write zeroes read no split ...passed 00:10:20.802 Test: blockdev write zeroes read split ...passed 00:10:20.802 Test: blockdev write zeroes read split partial ...passed 00:10:20.802 Test: blockdev reset ...[2024-12-06 15:37:09.282335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:20.802 [2024-12-06 15:37:09.285274] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:20.802 passed 00:10:20.802 Test: blockdev write read 8 blocks ...passed 00:10:20.802 Test: blockdev write read size > 128k ...passed 00:10:20.802 Test: blockdev write read invalid size ...passed 00:10:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.802 Test: blockdev write read max offset ...passed 00:10:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.802 Test: blockdev writev readv 8 blocks ...passed 00:10:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.802 Test: blockdev writev readv block ...passed 00:10:20.802 Test: blockdev writev readv size > 128k ...passed 00:10:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.802 Test: blockdev comparev and writev ...[2024-12-06 15:37:09.292770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf206000 len:0x1000 00:10:20.802 [2024-12-06 15:37:09.292831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme passthru rw ...passed 00:10:20.802 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:09.293868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.802 [2024-12-06 15:37:09.293951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme admin passthru ...passed 00:10:20.802 Test: blockdev copy ...passed 00:10:20.802 Suite: bdevio tests on: Nvme2n3 00:10:20.802 Test: blockdev write read block ...passed 00:10:20.802 Test: blockdev write zeroes read block ...passed 00:10:20.802 Test: blockdev write zeroes read no split ...passed 00:10:20.802 Test: blockdev write zeroes read split ...passed 00:10:20.802 Test: blockdev write zeroes read split partial ...passed 00:10:20.802 Test: blockdev reset ...[2024-12-06 15:37:09.321338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:20.802 [2024-12-06 15:37:09.324650] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:20.802 passed 00:10:20.802 Test: blockdev write read 8 blocks ...passed 00:10:20.802 Test: blockdev write read size > 128k ...passed 00:10:20.802 Test: blockdev write read invalid size ...passed 00:10:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.802 Test: blockdev write read max offset ...passed 00:10:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.802 Test: blockdev writev readv 8 blocks ...passed 00:10:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.802 Test: blockdev writev readv block ...passed 00:10:20.802 Test: blockdev writev readv size > 128k ...passed 00:10:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.802 Test: blockdev comparev and writev ...[2024-12-06 15:37:09.331922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba602000 len:0x1000 00:10:20.802 [2024-12-06 15:37:09.332007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme passthru rw ...passed 00:10:20.802 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:09.333065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.802 [2024-12-06 15:37:09.333110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme admin passthru ...passed 00:10:20.802 Test: blockdev copy ...passed 00:10:20.802 Suite: bdevio tests on: Nvme2n2 00:10:20.802 Test: blockdev write read block ...passed 00:10:20.802 Test: blockdev write zeroes read block ...passed 00:10:20.802 Test: blockdev write zeroes read no split ...passed 00:10:20.802 Test: blockdev write zeroes read split ...passed 00:10:20.802 Test: blockdev write zeroes read split partial ...passed 00:10:20.802 Test: blockdev reset ...[2024-12-06 15:37:09.360419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:20.802 [2024-12-06 15:37:09.363614] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:20.802 passed 00:10:20.802 Test: blockdev write read 8 blocks ...passed 00:10:20.802 Test: blockdev write read size > 128k ...passed 00:10:20.802 Test: blockdev write read invalid size ...passed 00:10:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.802 Test: blockdev write read max offset ...passed 00:10:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.802 Test: blockdev writev readv 8 blocks ...passed 00:10:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.802 Test: blockdev writev readv block ...passed 00:10:20.802 Test: blockdev writev readv size > 128k ...passed 00:10:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.802 Test: blockdev comparev and writev ...[2024-12-06 15:37:09.371272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf23b000 len:0x1000 00:10:20.802 [2024-12-06 15:37:09.371329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme passthru rw ...passed 00:10:20.802 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:09.372232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.802 [2024-12-06 15:37:09.372277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme admin passthru ...passed 00:10:20.802 Test: blockdev copy ...passed 00:10:20.802 Suite: bdevio tests on: Nvme2n1 00:10:20.802 Test: blockdev write read block ...passed 00:10:20.802 Test: blockdev write zeroes read block ...passed 00:10:20.802 Test: blockdev write zeroes read no split ...passed 00:10:20.802 Test: blockdev write zeroes read split ...passed 00:10:20.802 Test: blockdev write zeroes read split partial ...passed 00:10:20.802 Test: blockdev reset ...[2024-12-06 15:37:09.397283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:20.802 [2024-12-06 15:37:09.399696] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:20.802 passed 00:10:20.802 Test: blockdev write read 8 blocks ...passed 00:10:20.802 Test: blockdev write read size > 128k ...passed 00:10:20.802 Test: blockdev write read invalid size ...passed 00:10:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.802 Test: blockdev write read max offset ...passed 00:10:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.802 Test: blockdev writev readv 8 blocks ...passed 00:10:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.802 Test: blockdev writev readv block ...passed 00:10:20.802 Test: blockdev writev readv size > 128k ...passed 00:10:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.802 Test: blockdev comparev and writev ...[2024-12-06 15:37:09.407109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf237000 len:0x1000 00:10:20.802 [2024-12-06 15:37:09.407162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme passthru rw ...passed 00:10:20.802 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:09.407985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.802 [2024-12-06 15:37:09.408024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.802 passed 00:10:20.802 Test: blockdev nvme admin passthru ...passed 00:10:20.802 Test: blockdev copy ...passed 00:10:20.802 Suite: bdevio tests on: Nvme1n1 00:10:20.802 Test: blockdev write read block ...passed 00:10:20.802 Test: blockdev write zeroes read block ...passed 00:10:20.802 Test: blockdev write zeroes read no split ...passed 00:10:20.802 Test: blockdev write zeroes read split ...passed 00:10:20.802 Test: blockdev write zeroes read split partial ...passed 00:10:20.802 Test: blockdev reset ...[2024-12-06 15:37:09.445537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:20.802 [2024-12-06 15:37:09.447902] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:20.802 passed 00:10:20.802 Test: blockdev write read 8 blocks ...passed 00:10:20.802 Test: blockdev write read size > 128k ...passed 00:10:20.802 Test: blockdev write read invalid size ...passed 00:10:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.802 Test: blockdev write read max offset ...passed 00:10:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.802 Test: blockdev writev readv 8 blocks ...passed 00:10:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.802 Test: blockdev writev readv block ...passed 00:10:20.802 Test: blockdev writev readv size > 128k ...passed 00:10:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.802 Test: blockdev comparev and writev ...[2024-12-06 15:37:09.454774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf233000 len:0x1000 00:10:20.803 [2024-12-06 15:37:09.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.803 passed 00:10:20.803 Test: blockdev nvme passthru rw ...passed 00:10:20.803 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:09.455755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.803 [2024-12-06 15:37:09.455894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.803 passed 00:10:20.803 Test: blockdev nvme admin passthru ...passed 00:10:20.803 Test: blockdev copy ...passed 00:10:20.803 Suite: bdevio tests on: Nvme0n1 00:10:20.803 Test: blockdev write read block ...passed 00:10:20.803 Test: blockdev write zeroes read block ...passed 00:10:20.803 Test: blockdev write zeroes read no split ...passed 00:10:20.803 Test: blockdev write zeroes read split ...passed 00:10:20.803 Test: blockdev write zeroes read split partial ...passed 00:10:20.803 Test: blockdev reset ...[2024-12-06 15:37:09.482056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:20.803 passed 00:10:20.803 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:09.484726] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:10:20.803 passed 00:10:20.803 Test: blockdev write read size > 128k ...passed 00:10:20.803 Test: blockdev write read invalid size ...passed 00:10:20.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.803 Test: blockdev write read max offset ...passed 00:10:20.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.803 Test: blockdev writev readv 8 blocks ...passed 00:10:20.803 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.803 Test: blockdev writev readv block ...passed 00:10:20.803 Test: blockdev writev readv size > 128k ...passed 00:10:20.803 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.803 Test: blockdev comparev and writev ...passed 00:10:20.803 Test: blockdev nvme passthru rw ...[2024-12-06 15:37:09.490422] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:20.803 separate metadata which is not supported yet. 00:10:20.803 passed 00:10:20.803 Test: blockdev nvme passthru vendor specific ...passed 00:10:20.803 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:09.491077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:20.803 [2024-12-06 15:37:09.491126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:20.803 passed 00:10:21.061 Test: blockdev copy ...passed 00:10:21.061 00:10:21.061 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.061 suites 6 6 n/a 0 0 00:10:21.061 tests 138 138 138 0 0 00:10:21.061 asserts 893 893 893 0 n/a 00:10:21.061 00:10:21.061 Elapsed time = 0.536 seconds 00:10:21.061 0 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72902 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72902 ']' 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72902 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72902 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.061 killing process with pid 72902 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72902' 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72902 00:10:21.061 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72902 00:10:21.318 15:37:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:21.318 00:10:21.318 real 0m1.815s 00:10:21.319 user 0m4.549s 00:10:21.319 sys 0m0.452s 00:10:21.319 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.319 15:37:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:21.319 ************************************ 00:10:21.319 END TEST bdev_bounds 00:10:21.319 
************************************ 00:10:21.319 15:37:09 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:21.319 15:37:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.319 15:37:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.319 15:37:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.319 ************************************ 00:10:21.319 START TEST bdev_nbd 00:10:21.319 ************************************ 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72956 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72956 /var/tmp/spdk-nbd.sock 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.319 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.319 15:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:21.319 [2024-12-06 15:37:10.000198] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:21.319 [2024-12-06 15:37:10.000490] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.599 [2024-12-06 15:37:10.159792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.599 [2024-12-06 15:37:10.215501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.542 15:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.801 1+0 records in 00:10:22.801 1+0 records out 00:10:22.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550842 s, 7.4 MB/s 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.801 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.059 1+0 records in 00:10:23.059 1+0 records out 00:10:23.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535855 s, 7.6 MB/s 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.059 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.317 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.318 1+0 records in 00:10:23.318 1+0 records out 00:10:23.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902635 s, 4.5 MB/s 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.318 15:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.576 
15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.576 1+0 records in 00:10:23.576 1+0 records out 00:10:23.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000921458 s, 4.4 MB/s 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.576 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.834 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.093 1+0 records in 00:10:24.093 1+0 records out 00:10:24.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636286 s, 6.4 MB/s 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:24.093 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.351 1+0 records in 00:10:24.351 1+0 records out 00:10:24.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598191 s, 6.8 MB/s 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:24.351 15:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd0", 00:10:24.610 "bdev_name": "Nvme0n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd1", 00:10:24.610 "bdev_name": "Nvme1n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd2", 00:10:24.610 "bdev_name": "Nvme2n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd3", 00:10:24.610 "bdev_name": "Nvme2n2" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd4", 00:10:24.610 "bdev_name": "Nvme2n3" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd5", 00:10:24.610 "bdev_name": "Nvme3n1" 00:10:24.610 } 00:10:24.610 ]' 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd0", 00:10:24.610 "bdev_name": "Nvme0n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd1", 00:10:24.610 "bdev_name": "Nvme1n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd2", 
00:10:24.610 "bdev_name": "Nvme2n1" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd3", 00:10:24.610 "bdev_name": "Nvme2n2" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd4", 00:10:24.610 "bdev_name": "Nvme2n3" 00:10:24.610 }, 00:10:24.610 { 00:10:24.610 "nbd_device": "/dev/nbd5", 00:10:24.610 "bdev_name": "Nvme3n1" 00:10:24.610 } 00:10:24.610 ]' 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.610 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.869 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:25.126 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:25.127 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:25.127 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:25.127 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.385 15:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.643 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.901 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.158 15:37:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.416 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:26.980 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:10:27.354 /dev/nbd0 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.354 1+0 records in 00:10:27.354 1+0 records out 00:10:27.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605879 s, 6.8 MB/s 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.354 15:37:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:27.613 /dev/nbd1 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.613 1+0 records in 00:10:27.613 1+0 records out 00:10:27.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688622 s, 5.9 MB/s 
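The trace above shows the start-and-wait pattern used for every device in this suite: after each nbd_start_disk RPC, waitfornbd polls /proc/partitions for the new device name (up to 20 tries), then reads one 4 KiB block with direct I/O and checks the byte count with stat before moving on. The teardown earlier in the trace is the mirror image: waitfornbd_exit polls until the name disappears from /proc/partitions again. A minimal standalone sketch of the start side, assuming the rpc.py path and socket shown in the log plus an assumed 0.1 s poll interval (the real helpers live in common/autotest_common.sh and nbd_common.sh):

#!/usr/bin/env bash
# Sketch: start one SPDK nbd device and wait until the kernel exposes it.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as seen in this log
SOCK=/var/tmp/spdk-nbd.sock                       # RPC socket as seen in this log

"$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0

# The device is only usable once its name shows up in /proc/partitions.
for ((i = 1; i <= 20; i++)); do
    if grep -q -w nbd0 /proc/partitions; then
        break
    fi
    sleep 0.1   # assumed interval; the loop bound of 20 is from the log
done

# Prove the device is readable: pull one 4 KiB block with O_DIRECT and
# confirm that exactly 4096 bytes landed in the scratch file.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
if [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]; then
    echo "/dev/nbd0 is up"
fi
rm -f /tmp/nbdtest

The data-integrity pass later in the trace reuses the same devices: a 1 MiB random file is written onto each nbd with dd oflag=direct and read back with cmp -b -n 1M.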
00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.613 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.614 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.614 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.614 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.614 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:27.872 /dev/nbd10 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.872 1+0 records in 00:10:27.872 1+0 records out 00:10:27.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557469 s, 7.3 MB/s 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.872 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:28.440 /dev/nbd11 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.440 1+0 records in 00:10:28.440 1+0 records out 00:10:28.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730126 s, 5.6 MB/s 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:28.440 15:37:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:28.698 /dev/nbd12 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.698 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.698 1+0 records in 00:10:28.698 1+0 records out 00:10:28.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058427 s, 7.0 MB/s 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:28.699 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:28.957 /dev/nbd13 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.957 1+0 records in 00:10:28.957 1+0 records out 00:10:28.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871458 s, 4.7 MB/s 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.957 15:37:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.958 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.958 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:28.958 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.958 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.958 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.216 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd0", 00:10:29.216 "bdev_name": "Nvme0n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd1", 00:10:29.216 "bdev_name": "Nvme1n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd10", 00:10:29.216 "bdev_name": "Nvme2n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd11", 00:10:29.216 "bdev_name": "Nvme2n2" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd12", 00:10:29.216 "bdev_name": "Nvme2n3" 00:10:29.216 
}, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd13", 00:10:29.216 "bdev_name": "Nvme3n1" 00:10:29.216 } 00:10:29.216 ]' 00:10:29.216 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd0", 00:10:29.216 "bdev_name": "Nvme0n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd1", 00:10:29.216 "bdev_name": "Nvme1n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd10", 00:10:29.216 "bdev_name": "Nvme2n1" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd11", 00:10:29.216 "bdev_name": "Nvme2n2" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd12", 00:10:29.216 "bdev_name": "Nvme2n3" 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "nbd_device": "/dev/nbd13", 00:10:29.216 "bdev_name": "Nvme3n1" 00:10:29.216 } 00:10:29.216 ]' 00:10:29.216 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:29.474 /dev/nbd1 00:10:29.474 /dev/nbd10 00:10:29.474 /dev/nbd11 00:10:29.474 /dev/nbd12 00:10:29.474 /dev/nbd13' 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:29.474 /dev/nbd1 00:10:29.474 /dev/nbd10 00:10:29.474 /dev/nbd11 00:10:29.474 /dev/nbd12 00:10:29.474 /dev/nbd13' 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:29.474 256+0 records in 00:10:29.474 256+0 records out 00:10:29.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00800958 s, 131 MB/s 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.474 15:37:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:29.732 256+0 records in 00:10:29.732 256+0 records out 00:10:29.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183466 s, 5.7 MB/s 00:10:29.732 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.732 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:10:29.732 256+0 records in 00:10:29.732 256+0 records out 00:10:29.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168274 s, 6.2 MB/s 00:10:29.732 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.732 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:29.990 256+0 records in 00:10:29.990 256+0 records out 00:10:29.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155699 s, 6.7 MB/s 00:10:29.990 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.990 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:29.990 256+0 records in 00:10:29.990 256+0 records out 00:10:29.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118613 s, 8.8 MB/s 00:10:29.990 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.990 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:30.248 256+0 records in 00:10:30.248 256+0 records out 00:10:30.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131679 s, 8.0 MB/s 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:30.248 256+0 records in 00:10:30.248 256+0 records out 00:10:30.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132746 s, 7.9 MB/s 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.248 15:37:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.813 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.071 15:37:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.636 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.894 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.152 15:37:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:32.718 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:32.718 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:32.718 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:32.718 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.718 15:37:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.718 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:32.719 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.719 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.719 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:32.719 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.719 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:32.977 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:33.235 malloc_lvol_verify 00:10:33.235 15:37:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:33.493 f4345658-775a-442f-a43b-219860574344 00:10:33.493 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:33.752 82662e43-7c4d-456c-bf6a-6e409218bbf9 00:10:33.752 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:34.010 /dev/nbd0 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:34.010 mke2fs 1.47.0 
(5-Feb-2023) 00:10:34.010 Discarding device blocks: 0/4096 done 00:10:34.010 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:34.010 00:10:34.010 Allocating group tables: 0/1 done 00:10:34.010 Writing inode tables: 0/1 done 00:10:34.010 Creating journal (1024 blocks): done 00:10:34.010 Writing superblocks and filesystem accounting information: 0/1 done 00:10:34.010 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.010 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:34.011 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.011 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72956 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72956 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.269 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:10:34.526 killing process with pid 72956 00:10:34.526 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.526 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.526 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:10:34.526 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72956 00:10:34.526 15:37:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72956 00:10:34.784 ************************************ 00:10:34.784 END TEST bdev_nbd 00:10:34.784 ************************************ 00:10:34.784 15:37:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:34.784 00:10:34.784 real 0m13.421s 00:10:34.784 user 0m19.881s 00:10:34.784 sys 0m4.493s 00:10:34.784 15:37:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.784 15:37:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.784 15:37:23 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:34.784 15:37:23 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:34.784 skipping fio tests on NVMe due to multi-ns failures. 00:10:34.784 15:37:23 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:34.784 15:37:23 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:34.784 15:37:23 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:34.784 15:37:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:34.784 15:37:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.784 15:37:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.784 ************************************ 00:10:34.784 START TEST bdev_verify 00:10:34.784 ************************************ 00:10:34.784 15:37:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:34.784 [2024-12-06 15:37:23.421333] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:34.784 [2024-12-06 15:37:23.421543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:10:35.042 [2024-12-06 15:37:23.585289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:35.043 [2024-12-06 15:37:23.644590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.043 [2024-12-06 15:37:23.644662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.608 Running I/O for 5 seconds... 
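The five-second run started above comes from bdevperf, driven against the bdev.json config on two reactor cores. The invocation below is verbatim from the trace; the per-flag glosses are a hedged reading of bdevperf's options, not text from the log:

# bdev_verify invocation, flags annotated:
#   -q 128     keep 128 I/Os outstanding per job
#   -o 4096    4 KiB per I/O
#   -w verify  write a pattern, read it back, and compare
#   -t 5       run for 5 seconds
#   -C         every enabled core submits I/O to every bdev
#   -m 0x3     enable cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Because -C lets every enabled core submit to every bdev and -m 0x3 enables cores 0 and 1, the result table that follows lists each of the six Nvme bdevs twice, once for Core Mask 0x1 and once for 0x2.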
00:10:37.910 17792.00 IOPS, 69.50 MiB/s [2024-12-06T15:37:27.537Z] 18081.50 IOPS, 70.63 MiB/s [2024-12-06T15:37:28.475Z] 18231.67 IOPS, 71.22 MiB/s [2024-12-06T15:37:29.413Z] 18772.25 IOPS, 73.33 MiB/s [2024-12-06T15:37:29.413Z] 19045.60 IOPS, 74.40 MiB/s 00:10:40.720 Latency(us) 00:10:40.720 [2024-12-06T15:37:29.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.720 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0xbd0bd 00:10:40.720 Nvme0n1 : 5.06 1530.09 5.98 0.00 0.00 83373.95 10783.65 96754.97 00:10:40.720 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:40.720 Nvme0n1 : 5.08 1612.85 6.30 0.00 0.00 78735.45 11200.70 72447.07 00:10:40.720 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0xa0000 00:10:40.720 Nvme1n1 : 5.06 1528.99 5.97 0.00 0.00 83205.95 25499.46 85792.58 00:10:40.720 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0xa0000 length 0xa0000 00:10:40.720 Nvme1n1 : 5.08 1612.39 6.30 0.00 0.00 78632.79 9472.93 74830.20 00:10:40.720 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0x80000 00:10:40.720 Nvme2n1 : 5.08 1526.98 5.96 0.00 0.00 82972.63 9353.77 78166.57 00:10:40.720 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x80000 length 0x80000 00:10:40.720 Nvme2n1 : 5.08 1611.25 6.29 0.00 0.00 78536.09 11617.75 77213.32 00:10:40.720 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0x80000 00:10:40.720 Nvme2n2 : 5.08 1536.65 6.00 0.00 0.00 82395.63 12630.57 71493.82 00:10:40.720 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x80000 length 0x80000 00:10:40.720 Nvme2n2 : 5.06 1605.79 6.27 0.00 0.00 79350.82 10187.87 73400.32 00:10:40.720 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0x80000 00:10:40.720 Nvme2n3 : 5.09 1538.48 6.01 0.00 0.00 82009.05 5600.35 75306.82 00:10:40.720 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x80000 length 0x80000 00:10:40.720 Nvme2n3 : 5.06 1605.22 6.27 0.00 0.00 79224.16 10604.92 69587.32 00:10:40.720 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x0 length 0x20000 00:10:40.720 Nvme3n1 : 5.09 1540.75 6.02 0.00 0.00 81739.02 5689.72 83886.08 00:10:40.720 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:40.720 Verification LBA range: start 0x20000 length 0x20000 00:10:40.720 Nvme3n1 : 5.08 1613.35 6.30 0.00 0.00 78881.73 11498.59 70063.94 00:10:40.720 [2024-12-06T15:37:29.413Z] =================================================================================================================== 00:10:40.720 [2024-12-06T15:37:29.413Z] Total : 18862.80 73.68 0.00 0.00 80707.91 5600.35 96754.97 00:10:41.285 00:10:41.285 real 0m6.455s 00:10:41.285 user 0m11.943s 00:10:41.285 sys 0m0.323s 00:10:41.285 15:37:29 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.285 15:37:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:41.285 ************************************ 00:10:41.285 END TEST bdev_verify 00:10:41.285 ************************************ 00:10:41.285 15:37:29 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:41.285 15:37:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:41.285 15:37:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.285 15:37:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.285 ************************************ 00:10:41.285 START TEST bdev_verify_big_io 00:10:41.285 ************************************ 00:10:41.285 15:37:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:41.285 [2024-12-06 15:37:29.959092] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:41.285 [2024-12-06 15:37:29.959367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73460 ] 00:10:41.544 [2024-12-06 15:37:30.125870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:41.544 [2024-12-06 15:37:30.185382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.544 [2024-12-06 15:37:30.185406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.111 Running I/O for 5 seconds... 
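bdev_verify_big_io repeats the same verify workload with only the I/O size changed, 64 KiB instead of 4 KiB, trading IOPS for larger transfers: the totals below come out near 2K IOPS, against roughly 19K for the 4 KiB pass. The only delta in the command line is -o:

# Same bdevperf verify run as before, with 64 KiB I/Os:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3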
00:10:47.196 1650.00 IOPS, 103.12 MiB/s [2024-12-06T15:37:36.823Z] 3032.50 IOPS, 189.53 MiB/s [2024-12-06T15:37:36.823Z] 3422.67 IOPS, 213.92 MiB/s 00:10:48.130 Latency(us) 00:10:48.130 [2024-12-06T15:37:36.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.130 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0x0 length 0xbd0b 00:10:48.130 Nvme0n1 : 5.59 148.88 9.30 0.00 0.00 828611.42 24188.74 926559.88 00:10:48.130 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:48.130 Nvme0n1 : 5.48 151.76 9.48 0.00 0.00 809027.10 16801.05 1060015.01 00:10:48.130 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0x0 length 0xa000 00:10:48.130 Nvme1n1 : 5.69 154.21 9.64 0.00 0.00 787528.59 39321.60 880803.84 00:10:48.130 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0xa000 length 0xa000 00:10:48.130 Nvme1n1 : 5.58 160.65 10.04 0.00 0.00 751516.26 89128.96 766413.73 00:10:48.130 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0x0 length 0x8000 00:10:48.130 Nvme2n1 : 5.69 153.81 9.61 0.00 0.00 767083.05 40989.79 884616.84 00:10:48.130 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0x8000 length 0x8000 00:10:48.130 Nvme2n1 : 5.66 155.63 9.73 0.00 0.00 751032.15 41704.73 1426063.36 00:10:48.130 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.130 Verification LBA range: start 0x0 length 0x8000 00:10:48.131 Nvme2n2 : 5.69 157.37 9.84 0.00 0.00 733219.64 53620.36 880803.84 00:10:48.131 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.131 Verification LBA range: start 0x8000 length 0x8000 00:10:48.131 Nvme2n2 : 5.71 161.34 10.08 0.00 0.00 706180.84 38368.35 1448941.38 00:10:48.131 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.131 Verification LBA range: start 0x0 length 0x8000 00:10:48.131 Nvme2n3 : 5.74 159.86 9.99 0.00 0.00 698837.79 45041.11 880803.84 00:10:48.131 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.131 Verification LBA range: start 0x8000 length 0x8000 00:10:48.131 Nvme2n3 : 5.77 174.71 10.92 0.00 0.00 638135.40 23116.33 1471819.40 00:10:48.131 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:48.131 Verification LBA range: start 0x0 length 0x2000 00:10:48.131 Nvme3n1 : 5.81 179.36 11.21 0.00 0.00 610316.02 770.79 876990.84 00:10:48.131 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:48.131 Verification LBA range: start 0x2000 length 0x2000 00:10:48.131 Nvme3n1 : 5.83 194.89 12.18 0.00 0.00 557537.19 997.93 1502323.43 00:10:48.131 [2024-12-06T15:37:36.824Z] =================================================================================================================== 00:10:48.131 [2024-12-06T15:37:36.824Z] Total : 1952.47 122.03 0.00 0.00 712689.34 770.79 1502323.43 00:10:48.695 00:10:48.695 real 0m7.520s 00:10:48.695 user 0m13.989s 00:10:48.695 sys 0m0.365s 00:10:48.695 15:37:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.695 15:37:37 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:48.695 ************************************ 00:10:48.695 END TEST bdev_verify_big_io 00:10:48.695 ************************************ 00:10:48.953 15:37:37 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.953 15:37:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:48.953 15:37:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.953 15:37:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.953 ************************************ 00:10:48.953 START TEST bdev_write_zeroes 00:10:48.953 ************************************ 00:10:48.953 15:37:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:48.953 [2024-12-06 15:37:37.518467] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:48.953 [2024-12-06 15:37:37.518687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73558 ] 00:10:49.210 [2024-12-06 15:37:37.678763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.211 [2024-12-06 15:37:37.744830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.775 Running I/O for 1 seconds... 
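The one-second run started above is the write_zeroes pass: as the workload name suggests, bdevperf issues zero-out commands rather than pattern writes, and this time it runs on a single core (-c 0x1 in the EAL parameters, no -C). The invocation, verbatim from the trace apart from the comment:

# write_zeroes pass: 4 KiB zero-out commands, queue depth 128, for 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w write_zeroes -t 1 ''

The trailing '' appears in the logged command line itself, as the final argument handed to run_test.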
00:10:50.729 52608.00 IOPS, 205.50 MiB/s 00:10:50.729 Latency(us) 00:10:50.729 [2024-12-06T15:37:39.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.729 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.729 Nvme0n1 : 1.02 8761.44 34.22 0.00 0.00 14560.59 11498.59 27644.28 00:10:50.729 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.729 Nvme1n1 : 1.02 8749.69 34.18 0.00 0.00 14556.33 11856.06 27525.12 00:10:50.729 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.729 Nvme2n1 : 1.03 8739.03 34.14 0.00 0.00 14516.54 11617.75 25022.84 00:10:50.729 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.729 Nvme2n2 : 1.03 8779.64 34.30 0.00 0.00 14403.00 7328.12 21805.61 00:10:50.729 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.729 Nvme2n3 : 1.03 8768.90 34.25 0.00 0.00 14378.84 7864.32 22282.24 00:10:50.729 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:50.730 Nvme3n1 : 1.03 8758.10 34.21 0.00 0.00 14360.50 8162.21 24188.74 00:10:50.730 [2024-12-06T15:37:39.423Z] =================================================================================================================== 00:10:50.730 [2024-12-06T15:37:39.423Z] Total : 52556.81 205.30 0.00 0.00 14462.34 7328.12 27644.28 00:10:50.987 00:10:50.987 real 0m2.179s 00:10:50.987 user 0m1.770s 00:10:50.987 sys 0m0.292s 00:10:50.987 15:37:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.987 ************************************ 00:10:50.987 15:37:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:50.987 END TEST bdev_write_zeroes 00:10:50.987 ************************************ 00:10:50.987 15:37:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:50.987 15:37:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:50.987 15:37:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.987 15:37:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.987 ************************************ 00:10:50.987 START TEST bdev_json_nonenclosed 00:10:50.987 ************************************ 00:10:50.987 15:37:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:51.246 [2024-12-06 15:37:39.767900] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:51.246 [2024-12-06 15:37:39.768116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73600 ] 00:10:51.246 [2024-12-06 15:37:39.934623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.505 [2024-12-06 15:37:39.990624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.505 [2024-12-06 15:37:39.990810] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:51.505 [2024-12-06 15:37:39.990855] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:51.505 [2024-12-06 15:37:39.990889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.505 00:10:51.505 real 0m0.446s 00:10:51.505 user 0m0.206s 00:10:51.505 sys 0m0.136s 00:10:51.505 15:37:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.505 ************************************ 00:10:51.505 END TEST bdev_json_nonenclosed 00:10:51.505 ************************************ 00:10:51.505 15:37:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:51.505 15:37:40 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:51.505 15:37:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:51.505 15:37:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.505 15:37:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:51.505 ************************************ 00:10:51.505 START TEST bdev_json_nonarray 00:10:51.505 ************************************ 00:10:51.505 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:51.762 [2024-12-06 15:37:40.291833] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:51.763 [2024-12-06 15:37:40.292146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:10:52.021 [2024-12-06 15:37:40.460081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.021 [2024-12-06 15:37:40.518257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.021 [2024-12-06 15:37:40.518422] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
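The bdev_json_nonenclosed and bdev_json_nonarray cases deliberately feed malformed configs to bdevperf and expect spdk_app_stop on a non-zero code: the first config is not enclosed in {}, the second does not make 'subsystems' an array. For contrast, a sketch of the shape a valid config must have (contents illustrative only; the actual nonenclosed.json/nonarray.json fixtures are not printed in this log):

    # A well-formed SPDK JSON config: one enclosing object whose
    # "subsystems" key is an array of subsystem objects.
    cat > /tmp/valid.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF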
00:10:52.021 [2024-12-06 15:37:40.518467] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:52.021 [2024-12-06 15:37:40.518485] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.021 00:10:52.021 real 0m0.522s 00:10:52.021 user 0m0.246s 00:10:52.021 sys 0m0.170s 00:10:52.021 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.021 15:37:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:52.021 ************************************ 00:10:52.021 END TEST bdev_json_nonarray 00:10:52.021 ************************************ 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:52.279 15:37:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:52.279 00:10:52.279 real 0m35.909s 00:10:52.279 user 0m55.289s 00:10:52.279 sys 0m7.499s 00:10:52.279 15:37:40 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.279 ************************************ 00:10:52.279 END TEST blockdev_nvme 00:10:52.279 15:37:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.279 ************************************ 00:10:52.279 15:37:40 -- spdk/autotest.sh@209 -- # uname -s 00:10:52.279 15:37:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:52.279 15:37:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:52.279 15:37:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.279 15:37:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.279 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:10:52.279 ************************************ 00:10:52.279 START TEST blockdev_nvme_gpt 00:10:52.279 ************************************ 00:10:52.279 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:52.279 * Looking for test storage... 
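The gpt suite starting here labels a bare namespace and retypes its partitions so that SPDK's GPT bdev module will expose them as partition bdevs. Condensed from the trace that follows (device name, partition names, and GUIDs taken verbatim from it):

    # Create two equal partitions, then stamp the SPDK partition type GUIDs
    # (read out of module/bdev/gpt/gpt.h) and unique partition GUIDs.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1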
00:10:52.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:52.279 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.279 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.279 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.279 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:52.279 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.538 15:37:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.538 --rc genhtml_branch_coverage=1 00:10:52.538 --rc genhtml_function_coverage=1 00:10:52.538 --rc genhtml_legend=1 00:10:52.538 --rc geninfo_all_blocks=1 00:10:52.538 --rc geninfo_unexecuted_blocks=1 00:10:52.538 00:10:52.538 ' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.538 --rc 
genhtml_branch_coverage=1 00:10:52.538 --rc genhtml_function_coverage=1 00:10:52.538 --rc genhtml_legend=1 00:10:52.538 --rc geninfo_all_blocks=1 00:10:52.538 --rc geninfo_unexecuted_blocks=1 00:10:52.538 00:10:52.538 ' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.538 --rc genhtml_branch_coverage=1 00:10:52.538 --rc genhtml_function_coverage=1 00:10:52.538 --rc genhtml_legend=1 00:10:52.538 --rc geninfo_all_blocks=1 00:10:52.538 --rc geninfo_unexecuted_blocks=1 00:10:52.538 00:10:52.538 ' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.538 --rc genhtml_branch_coverage=1 00:10:52.538 --rc genhtml_function_coverage=1 00:10:52.538 --rc genhtml_legend=1 00:10:52.538 --rc geninfo_all_blocks=1 00:10:52.538 --rc geninfo_unexecuted_blocks=1 00:10:52.538 00:10:52.538 ' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73706 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73706 
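The lcov probe above gates coverage flags on 'lt 1.15 2', which scripts/common.sh implements as a component-wise version comparison. A simplified standalone sketch of the same idea (the real cmp_versions also splits on '-' and ':' and supports every comparison operator):

    # Succeed if dotted version $1 is strictly less than $2.
    version_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1  # versions are equal
    }
    version_lt 1.15 2 && echo "lcov older than 2: use legacy --rc options"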
00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 73706 ']' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.538 15:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.538 [2024-12-06 15:37:41.118407] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:10:52.538 [2024-12-06 15:37:41.118610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73706 ] 00:10:52.795 [2024-12-06 15:37:41.280703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.795 [2024-12-06 15:37:41.346412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.728 15:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.728 15:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:53.728 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:53.728 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:53.728 15:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:53.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:53.985 Waiting for block devices as requested 00:10:53.985 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.282 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.282 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.568 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:59.568 15:37:47 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:59.568 BYT; 00:10:59.568 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:59.568 BYT; 00:10:59.568 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:59.568 15:37:47 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:59.568 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:59.568 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:59.568 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:59.568 15:37:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:59.568 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:59.568 15:37:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:00.504 The operation has completed successfully. 00:11:00.504 15:37:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:01.880 The operation has completed successfully. 00:11:01.880 15:37:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:02.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.706 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.706 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.706 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.706 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:02.966 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.966 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:02.966 [] 00:11:02.966 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:02.966 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:02.966 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.966 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:03.226 15:37:51 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.226 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.226 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:03.486 15:37:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.486 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:03.486 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:03.487 15:37:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e367bb0a-cb14-467f-9be3-f7fd5f0e601d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e367bb0a-cb14-467f-9be3-f7fd5f0e601d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1a65eb53-c0ad-43f8-b465-363d34eae32b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a65eb53-c0ad-43f8-b465-363d34eae32b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6b9f8297-aef6-4e54-b999-536b93b13c0b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6b9f8297-aef6-4e54-b999-536b93b13c0b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6b9df29c-efe4-4df3-9432-95faf2f8da84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6b9df29c-efe4-4df3-9432-95faf2f8da84",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7cb98351-08c5-42d0-a7aa-4f48db04c83b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7cb98351-08c5-42d0-a7aa-4f48db04c83b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:03.487 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:03.487 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:03.487 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:03.487 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 73706 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 73706 ']' 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 73706 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73706 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.487 killing process with pid 73706 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73706' 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 73706 00:11:03.487 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 73706 00:11:04.063 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:04.063 15:37:52 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:04.063 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:04.063 15:37:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.063 15:37:52 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.063 ************************************ 00:11:04.063 START TEST bdev_hello_world 00:11:04.063 ************************************ 00:11:04.063 15:37:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:04.322 [2024-12-06 15:37:52.800082] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:04.322 [2024-12-06 15:37:52.800321] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74327 ] 00:11:04.322 [2024-12-06 15:37:52.959741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.580 [2024-12-06 15:37:53.013406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.840 [2024-12-06 15:37:53.454770] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:04.840 [2024-12-06 15:37:53.454829] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:04.840 [2024-12-06 15:37:53.454858] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:04.840 [2024-12-06 15:37:53.457856] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:04.840 [2024-12-06 15:37:53.458443] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:04.840 [2024-12-06 15:37:53.458491] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:04.840 [2024-12-06 15:37:53.458714] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
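The bdev_hello_world pass above uses the hello_bdev example application: open the named bdev, write a string through an I/O channel, and read it back. The equivalent direct invocation, with arguments exactly as run by this test:

    # Open Nvme0n1 from bdev.json, write "Hello World!" and read it back.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1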
00:11:04.840 00:11:04.840 [2024-12-06 15:37:53.458754] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:05.098 00:11:05.098 real 0m1.046s 00:11:05.098 user 0m0.667s 00:11:05.098 sys 0m0.273s 00:11:05.098 15:37:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.098 ************************************ 00:11:05.098 END TEST bdev_hello_world 00:11:05.098 ************************************ 00:11:05.098 15:37:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:05.098 15:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:05.098 15:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.099 15:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.099 15:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.356 ************************************ 00:11:05.356 START TEST bdev_bounds 00:11:05.356 ************************************ 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74358 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:05.356 Process bdevio pid: 74358 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74358' 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74358 00:11:05.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74358 ']' 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.356 15:37:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:05.356 [2024-12-06 15:37:53.899659] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
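Earlier in this suite the bdev list was built by filtering bdev_get_bdevs output with jq (select on .claimed == false, then .name). A standalone sketch of the same filter against a running target, assuming the repo's scripts/rpc.py is available:

    # Print the names of all unclaimed bdevs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'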
00:11:05.357 [2024-12-06 15:37:53.900218] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74358 ] 00:11:05.615 [2024-12-06 15:37:54.058481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.615 [2024-12-06 15:37:54.118261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.615 [2024-12-06 15:37:54.118380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.615 [2024-12-06 15:37:54.118433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.564 15:37:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.564 15:37:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:06.564 15:37:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:06.564 I/O targets: 00:11:06.564 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:06.564 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:06.564 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:06.564 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:06.564 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:06.564 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:06.564 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:06.564 00:11:06.564 00:11:06.564 CUnit - A unit testing framework for C - Version 2.1-3 00:11:06.564 http://cunit.sourceforge.net/ 00:11:06.564 00:11:06.564 00:11:06.564 Suite: bdevio tests on: Nvme3n1 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.564 Test: blockdev write zeroes read split partial ...passed 00:11:06.564 Test: blockdev reset ...[2024-12-06 15:37:55.043283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:06.564 passed 00:11:06.564 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:55.045884] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:06.564 passed 00:11:06.564 Test: blockdev write read size > 128k ...passed 00:11:06.564 Test: blockdev write read invalid size ...passed 00:11:06.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.564 Test: blockdev write read max offset ...passed 00:11:06.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.564 Test: blockdev writev readv 8 blocks ...passed 00:11:06.564 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.564 Test: blockdev writev readv block ...passed 00:11:06.564 Test: blockdev writev readv size > 128k ...passed 00:11:06.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.564 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.053437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b940e000 len:0x1000 00:11:06.564 [2024-12-06 15:37:55.053514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme passthru rw ...passed 00:11:06.564 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.564 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:55.054497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:06.564 [2024-12-06 15:37:55.054544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev copy ...passed 00:11:06.564 Suite: bdevio tests on: Nvme2n3 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.564 Test: blockdev write zeroes read split partial ...passed 00:11:06.564 Test: blockdev reset ...[2024-12-06 15:37:55.079803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:06.564 passed 00:11:06.564 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:55.082599] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:06.564 passed 00:11:06.564 Test: blockdev write read size > 128k ...passed 00:11:06.564 Test: blockdev write read invalid size ...passed 00:11:06.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.564 Test: blockdev write read max offset ...passed 00:11:06.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.564 Test: blockdev writev readv 8 blocks ...passed 00:11:06.564 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.564 Test: blockdev writev readv block ...passed 00:11:06.564 Test: blockdev writev readv size > 128k ...passed 00:11:06.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.564 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.089863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9408000 len:0x1000 00:11:06.564 [2024-12-06 15:37:55.089919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme passthru rw ...passed 00:11:06.564 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.564 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:55.090897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:06.564 [2024-12-06 15:37:55.090963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev copy ...passed 00:11:06.564 Suite: bdevio tests on: Nvme2n2 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.564 Test: blockdev write zeroes read split partial ...passed 00:11:06.564 Test: blockdev reset ...[2024-12-06 15:37:55.116414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:06.564 [2024-12-06 15:37:55.119045] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:06.564 passed 00:11:06.564 Test: blockdev write read 8 blocks ...passed 00:11:06.564 Test: blockdev write read size > 128k ...passed 00:11:06.564 Test: blockdev write read invalid size ...passed 00:11:06.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.564 Test: blockdev write read max offset ...passed 00:11:06.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.564 Test: blockdev writev readv 8 blocks ...passed 00:11:06.564 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.564 Test: blockdev writev readv block ...passed 00:11:06.564 Test: blockdev writev readv size > 128k ...passed 00:11:06.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.564 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.126862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9402000 len:0x1000 00:11:06.564 [2024-12-06 15:37:55.126917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme passthru rw ...passed 00:11:06.564 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:55.127766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:06.564 [2024-12-06 15:37:55.127806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme admin passthru ...passed 00:11:06.564 Test: blockdev copy ...passed 00:11:06.564 Suite: bdevio tests on: Nvme2n1 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.564 Test: blockdev write zeroes read split partial ...passed 00:11:06.564 Test: blockdev reset ...[2024-12-06 15:37:55.152885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:06.564 [2024-12-06 15:37:55.155589] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:11:06.564 passed 00:11:06.564 Test: blockdev write read 8 blocks ...passed 00:11:06.564 Test: blockdev write read size > 128k ...passed
00:11:06.564 Test: blockdev write read invalid size ...passed 00:11:06.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.564 Test: blockdev write read max offset ...passed 00:11:06.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.564 Test: blockdev writev readv 8 blocks ...passed 00:11:06.564 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.564 Test: blockdev writev readv block ...passed 00:11:06.564 Test: blockdev writev readv size > 128k ...passed 00:11:06.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.564 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.162627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7604000 len:0x1000 00:11:06.564 [2024-12-06 15:37:55.162682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme passthru rw ...passed 00:11:06.564 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:37:55.163542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:06.564 [2024-12-06 15:37:55.163582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme admin passthru ...passed 00:11:06.564 Test: blockdev copy ...passed 00:11:06.564 Suite: bdevio tests on: Nvme1n1p2 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.564 Test: blockdev write zeroes read split partial ...passed 00:11:06.564 Test: blockdev reset ...[2024-12-06 15:37:55.191018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:06.564 passed 00:11:06.564 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:55.193540] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:11:06.564 passed 00:11:06.564 Test: blockdev write read size > 128k ...passed 00:11:06.564 Test: blockdev write read invalid size ...passed 00:11:06.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.564 Test: blockdev write read max offset ...passed 00:11:06.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.564 Test: blockdev writev readv 8 blocks ...passed 00:11:06.564 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.564 Test: blockdev writev readv block ...passed 00:11:06.564 Test: blockdev writev readv size > 128k ...passed 00:11:06.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.564 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.200469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cd63d000 len:0x1000 00:11:06.564 [2024-12-06 15:37:55.200655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.564 passed 00:11:06.564 Test: blockdev nvme passthru rw ...passed 00:11:06.564 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.564 Test: blockdev nvme admin passthru ...passed 00:11:06.564 Test: blockdev copy ...passed 00:11:06.564 Suite: bdevio tests on: Nvme1n1p1 00:11:06.564 Test: blockdev write read block ...passed 00:11:06.564 Test: blockdev write zeroes read block ...passed 00:11:06.564 Test: blockdev write zeroes read no split ...passed 00:11:06.564 Test: blockdev write zeroes read split ...passed 00:11:06.565 Test: blockdev write zeroes read split partial ...passed 00:11:06.565 Test: blockdev reset ...[2024-12-06 15:37:55.215441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:06.565 passed 00:11:06.565 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:55.217507] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:11:06.565 passed 00:11:06.565 Test: blockdev write read size > 128k ...passed 00:11:06.565 Test: blockdev write read invalid size ...passed 00:11:06.565 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.565 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.565 Test: blockdev write read max offset ...passed 00:11:06.565 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.565 Test: blockdev writev readv 8 blocks ...passed 00:11:06.565 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.565 Test: blockdev writev readv block ...passed 00:11:06.565 Test: blockdev writev readv size > 128k ...passed 00:11:06.565 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.565 Test: blockdev comparev and writev ...[2024-12-06 15:37:55.224461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cd639000 len:0x1000 00:11:06.565 [2024-12-06 15:37:55.224516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:06.565 passed 00:11:06.565 Test: blockdev nvme passthru rw ...passed 00:11:06.565 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.565 Test: blockdev nvme admin passthru ...passed 00:11:06.565 Test: blockdev copy ...passed 00:11:06.565 Suite: bdevio tests on: Nvme0n1 00:11:06.565 Test: blockdev write read block ...passed 00:11:06.565 Test: blockdev write zeroes read block ...passed 00:11:06.565 Test: blockdev write zeroes read no split ...passed 00:11:06.565 Test: blockdev write zeroes read split ...passed 00:11:06.565 Test: blockdev write zeroes read split partial ...passed 00:11:06.565 Test: blockdev reset ...[2024-12-06 15:37:55.239602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:06.565 passed 00:11:06.565 Test: blockdev write read 8 blocks ...[2024-12-06 15:37:55.241794] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:06.565 passed 00:11:06.565 Test: blockdev write read size > 128k ...passed 00:11:06.565 Test: blockdev write read invalid size ...passed 00:11:06.565 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.565 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.565 Test: blockdev write read max offset ...passed 00:11:06.565 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.565 Test: blockdev writev readv 8 blocks ...passed 00:11:06.565 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.565 Test: blockdev writev readv block ...passed 00:11:06.565 Test: blockdev writev readv size > 128k ...passed 00:11:06.565 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.565 Test: blockdev comparev and writev ...passed 00:11:06.565 Test: blockdev nvme passthru rw ...[2024-12-06 15:37:55.247797] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:06.565 separate metadata which is not supported yet. 
00:11:06.565 passed 00:11:06.565 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.565 Test: blockdev nvme admin passthru ...[2024-12-06 15:37:55.248427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:06.565 [2024-12-06 15:37:55.248482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:06.565 passed 00:11:06.565 Test: blockdev copy ...passed 00:11:06.565 00:11:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:11:06.565 suites 7 7 n/a 0 0 00:11:06.565 tests 161 161 161 0 0 00:11:06.565 asserts 1025 1025 1025 0 n/a 00:11:06.565 00:11:06.565 Elapsed time = 0.512 seconds 00:11:06.841 0 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74358 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74358 ']' 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74358 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74358 00:11:06.841 killing process with pid 74358 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74358' 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74358 00:11:06.841 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74358 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:07.099 00:11:07.099 real 0m1.789s 00:11:07.099 user 0m4.492s 00:11:07.099 sys 0m0.461s 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:07.099 ************************************ 00:11:07.099 END TEST bdev_bounds 00:11:07.099 ************************************ 00:11:07.099 15:37:55 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:07.099 15:37:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.099 15:37:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.099 15:37:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:07.099 ************************************ 00:11:07.099 START TEST bdev_nbd 00:11:07.099 ************************************ 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:07.099 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74412 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74412 /var/tmp/spdk-nbd.sock 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74412 ']' 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.100 15:37:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 [2024-12-06 15:37:55.727486] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
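The trace that follows drives SPDK's NBD integration end to end: bdev_svc is started on a private RPC socket with the generated bdev config, each bdev is exported as a /dev/nbdX node via nbd_start_disk, the node is verified with a direct-I/O dd read, and the devices are detached again with nbd_stop_disk. A condensed sketch of that same flow, using only the commands visible in this trace; the repo path and socket are the ones shown in the log, while the output file path, the socket-wait loop, and the 0.1 s retry interval are illustrative stand-ins for the harness's waitforlisten/waitfornbd helpers:

SPDK=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-nbd.sock

# Start the bdev service on a private RPC socket with the generated bdev config.
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$SPDK/test/bdev/bdev.json" &
svc_pid=$!

# The harness waits for the RPC socket via waitforlisten; a simple stand-in:
while [ ! -S "$sock" ]; do sleep 0.1; done

# Export a bdev as a kernel NBD block device.
"$SPDK/scripts/rpc.py" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

# Wait up to 20 tries for the kernel to publish the device, as waitfornbd does
# (the sleep interval is an assumption; it is not visible in this trace).
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done

# Read one 4 KiB block through the NBD node and confirm the output is non-empty,
# mirroring the dd/stat check the trace performs.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" != 0 ]

# Tear down: detach the NBD device, then stop the service (the killprocess pattern).
"$SPDK/scripts/rpc.py" -s "$sock" nbd_stop_disk /dev/nbd0
kill "$svc_pid" && wait "$svc_pid"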
00:11:07.100 [2024-12-06 15:37:55.727912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.358 [2024-12-06 15:37:55.883429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.358 [2024-12-06 15:37:55.932734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.293 15:37:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.551 1+0 records in 00:11:08.551 1+0 records out 00:11:08.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533825 s, 7.7 MB/s 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.551 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.824 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.824 1+0 records in 00:11:08.824 1+0 records out 00:11:08.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431184 s, 9.5 MB/s 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:09.106 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.364 1+0 records in 00:11:09.364 1+0 records out 00:11:09.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546399 s, 7.5 MB/s 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:09.364 15:37:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:09.622 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.623 1+0 records in 00:11:09.623 1+0 records out 00:11:09.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529448 s, 7.7 MB/s 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:09.623 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.188 1+0 records in 00:11:10.188 1+0 records out 00:11:10.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554104 s, 7.4 MB/s 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:10.188 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.459 1+0 records in 00:11:10.459 1+0 records out 00:11:10.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629306 s, 6.5 MB/s 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:10.459 15:37:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.716 1+0 records in 00:11:10.716 1+0 records out 00:11:10.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726006 s, 5.6 MB/s 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:10.716 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:10.974 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd0", 00:11:10.974 "bdev_name": "Nvme0n1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd1", 00:11:10.974 "bdev_name": "Nvme1n1p1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd2", 00:11:10.974 "bdev_name": "Nvme1n1p2" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd3", 00:11:10.974 "bdev_name": "Nvme2n1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd4", 00:11:10.974 "bdev_name": "Nvme2n2" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd5", 00:11:10.974 "bdev_name": "Nvme2n3" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd6", 00:11:10.974 "bdev_name": "Nvme3n1" 00:11:10.974 } 00:11:10.974 ]' 00:11:10.974 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:10.974 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd0", 00:11:10.974 "bdev_name": "Nvme0n1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd1", 00:11:10.974 "bdev_name": "Nvme1n1p1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd2", 00:11:10.974 "bdev_name": "Nvme1n1p2" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd3", 00:11:10.974 "bdev_name": "Nvme2n1" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd4", 00:11:10.974 "bdev_name": "Nvme2n2" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd5", 00:11:10.974 "bdev_name": "Nvme2n3" 00:11:10.974 }, 00:11:10.974 { 00:11:10.974 "nbd_device": "/dev/nbd6", 00:11:10.974 "bdev_name": "Nvme3n1" 00:11:10.974 } 00:11:10.974 ]' 00:11:10.974 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:11.231 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.232 15:37:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:11.488 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:11.488 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.489 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.746 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.308 15:38:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.308 15:38:00 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.565 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.821 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.079 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.337 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.338 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.338 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.338 15:38:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:13.905 
15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:13.905 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:14.163 /dev/nbd0 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.163 1+0 records in 00:11:14.163 1+0 records out 00:11:14.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523432 s, 7.8 MB/s 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:14.163 15:38:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:14.422 /dev/nbd1 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.422 15:38:03 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.422 1+0 records in 00:11:14.422 1+0 records out 00:11:14.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052881 s, 7.7 MB/s 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:14.422 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:14.680 /dev/nbd10 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.680 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.940 1+0 records in 00:11:14.940 1+0 records out 00:11:14.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730138 s, 5.6 MB/s 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:14.940 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:15.200 /dev/nbd11 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.200 1+0 records in 00:11:15.200 1+0 records out 00:11:15.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600256 s, 6.8 MB/s 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:15.200 15:38:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:15.460 /dev/nbd12 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.460 1+0 records in 00:11:15.460 1+0 records out 00:11:15.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710518 s, 5.8 MB/s 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:15.460 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:15.719 /dev/nbd13 00:11:15.719 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:15.719 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:15.719 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:15.720 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.979 1+0 records in 00:11:15.979 1+0 records out 00:11:15.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093094 s, 4.4 MB/s 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:15.979 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:15.979 /dev/nbd14 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.238 1+0 records in 00:11:16.238 1+0 records out 00:11:16.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568857 s, 7.2 MB/s 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.238 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:16.498 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd0", 00:11:16.498 "bdev_name": "Nvme0n1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd1", 00:11:16.498 "bdev_name": "Nvme1n1p1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd10", 00:11:16.498 "bdev_name": "Nvme1n1p2" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd11", 00:11:16.498 "bdev_name": "Nvme2n1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd12", 00:11:16.498 "bdev_name": "Nvme2n2" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd13", 00:11:16.498 "bdev_name": "Nvme2n3" 
00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd14", 00:11:16.498 "bdev_name": "Nvme3n1" 00:11:16.498 } 00:11:16.498 ]' 00:11:16.498 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd0", 00:11:16.498 "bdev_name": "Nvme0n1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd1", 00:11:16.498 "bdev_name": "Nvme1n1p1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd10", 00:11:16.498 "bdev_name": "Nvme1n1p2" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd11", 00:11:16.498 "bdev_name": "Nvme2n1" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd12", 00:11:16.498 "bdev_name": "Nvme2n2" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd13", 00:11:16.498 "bdev_name": "Nvme2n3" 00:11:16.498 }, 00:11:16.498 { 00:11:16.498 "nbd_device": "/dev/nbd14", 00:11:16.498 "bdev_name": "Nvme3n1" 00:11:16.498 } 00:11:16.498 ]' 00:11:16.498 15:38:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:16.498 /dev/nbd1 00:11:16.498 /dev/nbd10 00:11:16.498 /dev/nbd11 00:11:16.498 /dev/nbd12 00:11:16.498 /dev/nbd13 00:11:16.498 /dev/nbd14' 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:16.498 /dev/nbd1 00:11:16.498 /dev/nbd10 00:11:16.498 /dev/nbd11 00:11:16.498 /dev/nbd12 00:11:16.498 /dev/nbd13 00:11:16.498 /dev/nbd14' 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:16.498 256+0 records in 00:11:16.498 256+0 records out 00:11:16.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102519 s, 102 MB/s 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:16.498 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:16.756 256+0 records in 00:11:16.756 256+0 records out 00:11:16.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.173973 s, 6.0 MB/s 00:11:16.756 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:16.756 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:16.756 256+0 records in 00:11:16.756 256+0 records out 00:11:16.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173877 s, 6.0 MB/s 00:11:16.756 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:16.756 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:17.013 256+0 records in 00:11:17.013 256+0 records out 00:11:17.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182535 s, 5.7 MB/s 00:11:17.013 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:17.013 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:17.270 256+0 records in 00:11:17.270 256+0 records out 00:11:17.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161088 s, 6.5 MB/s 00:11:17.270 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:17.270 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:17.270 256+0 records in 00:11:17.270 256+0 records out 00:11:17.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183608 s, 5.7 MB/s 00:11:17.270 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:17.270 15:38:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:17.527 256+0 records in 00:11:17.527 256+0 records out 00:11:17.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167066 s, 6.3 MB/s 00:11:17.527 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:17.527 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:17.784 256+0 records in 00:11:17.784 256+0 records out 00:11:17.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135466 s, 7.7 MB/s 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.784 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.040 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.298 15:38:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.557 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:18.815 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.073 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.331 15:38:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.590 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.850 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:20.109 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:20.368 malloc_lvol_verify 00:11:20.368 15:38:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:20.628 22384b7d-0a52-4a34-b199-6aea8526081b 00:11:20.628 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:20.887 dda9e056-2184-4ff5-b4c4-b3e8af3588ae 00:11:20.887 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:21.146 /dev/nbd0 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:21.146 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.146 Discarding device blocks: 0/4096 done 00:11:21.146 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:21.146 00:11:21.146 Allocating group tables: 0/1 done 00:11:21.146 Writing inode tables: 0/1 done 00:11:21.146 Creating journal (1024 blocks): done 00:11:21.146 Writing superblocks and filesystem accounting information: 0/1 done 00:11:21.146 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:21.146 15:38:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74412 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74412 ']' 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74412 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74412 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.422 killing process with pid 74412 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74412' 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74412 00:11:21.422 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74412 00:11:21.988 15:38:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:21.988 00:11:21.988 real 0m14.734s 00:11:21.988 user 0m21.614s 00:11:21.988 sys 0m5.051s 00:11:21.988 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.988 15:38:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:21.988 ************************************ 00:11:21.988 END TEST bdev_nbd 00:11:21.988 ************************************ 00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:21.988 skipping fio tests on NVMe due to multi-ns failures. 00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:21.988 15:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:21.988 15:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:21.988 15:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.988 15:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:21.988 ************************************ 00:11:21.988 START TEST bdev_verify 00:11:21.988 ************************************ 00:11:21.988 15:38:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:21.988 [2024-12-06 15:38:10.513555] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:21.988 [2024-12-06 15:38:10.513757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74860 ] 00:11:21.988 [2024-12-06 15:38:10.666942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:22.248 [2024-12-06 15:38:10.731824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.248 [2024-12-06 15:38:10.731893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.815 Running I/O for 5 seconds... 
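bdevperf has just been launched for the verify pass. Assembled from the command line above, with comments reflecting standard bdevperf options (the -C flag is kept verbatim from the log rather than interpreted):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # config with the NVMe/GPT bdevs under test
        -q 128      # queue depth per job
        -o 4096     # I/O size in bytes
        -w verify   # write, read back and compare
        -t 5        # run time in seconds
        -m 0x3      # core mask: cores 0 and 1, matching the two reactors above
        -C          # taken verbatim from the log; see bdevperf usage for details
    )
    "$bdevperf" "${args[@]}"

As a sanity check on the progress lines that follow: the first sample, 17216.00 IOPS at 4096-byte I/Os, works out to 17216 * 4096 / 1048576 = 67.25 MiB/s, exactly the figure printed.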
00:11:25.134 17216.00 IOPS, 67.25 MiB/s [2024-12-06T15:38:14.780Z] 17920.00 IOPS, 70.00 MiB/s [2024-12-06T15:38:15.739Z] 18197.33 IOPS, 71.08 MiB/s [2024-12-06T15:38:16.670Z] 18064.00 IOPS, 70.56 MiB/s [2024-12-06T15:38:16.670Z] 18137.60 IOPS, 70.85 MiB/s
00:11:27.977 Latency(us)
00:11:27.977 [2024-12-06T15:38:16.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:27.977 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0xbd0bd
00:11:27.977 Nvme0n1 : 5.06 1266.01 4.95 0.00 0.00 100603.50 23235.49 110100.48
00:11:27.977 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:11:27.977 Nvme0n1 : 5.06 1265.83 4.94 0.00 0.00 100739.83 24069.59 95801.72
00:11:27.977 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0x4ff80
00:11:27.977 Nvme1n1p1 : 5.09 1269.72 4.96 0.00 0.00 99985.92 12690.15 99614.72
00:11:27.977 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x4ff80 length 0x4ff80
00:11:27.977 Nvme1n1p1 : 5.06 1265.22 4.94 0.00 0.00 100573.31 24665.37 89605.59
00:11:27.977 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0x4ff7f
00:11:27.977 Nvme1n1p2 : 5.09 1269.11 4.96 0.00 0.00 99784.80 12809.31 91512.09
00:11:27.977 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:11:27.977 Nvme1n1p2 : 5.06 1264.79 4.94 0.00 0.00 100394.69 22997.18 84362.71
00:11:27.977 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0x80000
00:11:27.977 Nvme2n1 : 5.11 1278.29 4.99 0.00 0.00 99185.01 10247.45 83409.45
00:11:27.977 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x80000 length 0x80000
00:11:27.977 Nvme2n1 : 5.09 1270.92 4.96 0.00 0.00 99706.91 7030.23 78166.57
00:11:27.977 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0x80000
00:11:27.977 Nvme2n2 : 5.11 1277.90 4.99 0.00 0.00 98991.07 10247.45 83409.45
00:11:27.977 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x80000 length 0x80000
00:11:27.977 Nvme2n2 : 5.10 1280.17 5.00 0.00 0.00 98966.56 9592.09 80073.08
00:11:27.977 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.977 Verification LBA range: start 0x0 length 0x80000
00:11:27.977 Nvme2n3 : 5.11 1277.51 4.99 0.00 0.00 98793.18 10187.87 86269.21
00:11:27.978 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.978 Verification LBA range: start 0x80000 length 0x80000
00:11:27.978 Nvme2n3 : 5.10 1279.78 5.00 0.00 0.00 98784.94 9830.40 84362.71
00:11:27.978 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:27.978 Verification LBA range: start 0x0 length 0x20000
00:11:27.978 Nvme3n1 : 5.11 1277.15 4.99 0.00 0.00 98614.84 10366.60 88175.71
00:11:27.978 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:27.978 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.10 1279.41 5.00 0.00 0.00 98591.03 10128.29 89128.96
00:11:27.978 [2024-12-06T15:38:16.671Z] ===================================================================================================================
00:11:27.978 [2024-12-06T15:38:16.671Z] Total : 17821.79 69.62 0.00 0.00 99544.74 7030.23 110100.48
00:11:28.542
00:11:28.542 real 0m6.503s
00:11:28.542 user 0m12.065s
00:11:28.542 sys 0m0.307s
00:11:28.542 15:38:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:28.542 15:38:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:28.542 ************************************
00:11:28.542 END TEST bdev_verify
00:11:28.542 ************************************
00:11:28.542 15:38:16 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:28.542 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:28.542 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:28.542 15:38:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:28.542 ************************************
00:11:28.542 START TEST bdev_verify_big_io
00:11:28.542 ************************************
00:11:28.542 15:38:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:28.542 [2024-12-06 15:38:17.113524] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:11:28.542 [2024-12-06 15:38:17.113748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74953 ]
00:11:28.800 [2024-12-06 15:38:17.275459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:28.800 [2024-12-06 15:38:17.342753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.800 [2024-12-06 15:38:17.342846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:29.366 Running I/O for 5 seconds...
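The big-I/O pass now starting reuses the same harness with -o 65536, so each I/O moves 64 KiB and the IOPS figures drop accordingly while throughput stays comparable. A quick check of the conversion against the first progress sample of the run below:

    # 961 IOPS at 65536-byte I/Os: 961 * 65536 / 1048576 = 60.0625 MiB/s,
    # which rounds to the 60.06 MiB/s printed in the first sample.
    awk 'BEGIN { printf "%.2f MiB/s\n", 961 * 65536 / 1048576 }'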
00:11:33.587 961.00 IOPS, 60.06 MiB/s [2024-12-06T15:38:24.182Z] 2016.00 IOPS, 126.00 MiB/s [2024-12-06T15:38:24.182Z] 2412.67 IOPS, 150.79 MiB/s
00:11:35.489 Latency(us)
00:11:35.489 [2024-12-06T15:38:24.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:35.489 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x0 length 0xbd0b
00:11:35.489 Nvme0n1 : 5.81 104.84 6.55 0.00 0.00 1162379.43 21090.68 1189657.13
00:11:35.489 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0xbd0b length 0xbd0b
00:11:35.489 Nvme0n1 : 5.84 109.54 6.85 0.00 0.00 1120860.44 28597.53 1182031.13
00:11:35.489 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x0 length 0x4ff8
00:11:35.489 Nvme1n1p1 : 5.81 110.12 6.88 0.00 0.00 1088297.15 99138.09 1006632.96
00:11:35.489 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x4ff8 length 0x4ff8
00:11:35.489 Nvme1n1p1 : 5.84 110.20 6.89 0.00 0.00 1075869.84 95801.72 1113397.06
00:11:35.489 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x0 length 0x4ff7
00:11:35.489 Nvme1n1p2 : 5.87 112.53 7.03 0.00 0.00 1035241.12 59339.87 1006632.96
00:11:35.489 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x4ff7 length 0x4ff7
00:11:35.489 Nvme1n1p2 : 5.85 113.75 7.11 0.00 0.00 1025849.21 101044.60 1121023.07
00:11:35.489 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x0 length 0x8000
00:11:35.489 Nvme2n1 : 5.94 118.56 7.41 0.00 0.00 961718.63 60293.12 1014258.97
00:11:35.489 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x8000 length 0x8000
00:11:35.489 Nvme2n1 : 5.92 118.51 7.41 0.00 0.00 961822.01 64821.06 1128649.08
00:11:35.489 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x0 length 0x8000
00:11:35.489 Nvme2n2 : 5.97 110.86 6.93 0.00 0.00 993655.36 61484.68 2013265.92
00:11:35.489 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.489 Verification LBA range: start 0x8000 length 0x8000
00:11:35.490 Nvme2n2 : 5.98 123.79 7.74 0.00 0.00 894811.64 31218.97 1143901.09
00:11:35.490 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.490 Verification LBA range: start 0x0 length 0x8000
00:11:35.490 Nvme2n3 : 6.00 119.78 7.49 0.00 0.00 897108.37 27286.81 2043769.95
00:11:35.490 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.490 Verification LBA range: start 0x8000 length 0x8000
00:11:35.490 Nvme2n3 : 5.98 128.42 8.03 0.00 0.00 840666.84 27167.65 1159153.11
00:11:35.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.490 Verification LBA range: start 0x0 length 0x2000
00:11:35.490 Nvme3n1 : 6.07 139.64 8.73 0.00 0.00 748366.55 1802.24 2074273.98
00:11:35.490 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.490 Verification LBA range: start 0x2000 length 0x2000
00:11:35.490 Nvme3n1 : 6.05 148.23 9.26 0.00 0.00 708690.59 1102.20 1159153.11 [2024-12-06T15:38:24.183Z] ===================================================================================================================
00:11:35.490 [2024-12-06T15:38:24.183Z] Total : 1668.77 104.30 0.00 0.00 951051.51 1102.20 2074273.98
00:11:36.056
00:11:36.056 real 0m7.700s
00:11:36.056 user 0m14.351s
00:11:36.056 sys 0m0.376s
00:11:36.056 15:38:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:36.056 15:38:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:36.056 ************************************
00:11:36.056 END TEST bdev_verify_big_io
00:11:36.056 ************************************
00:11:36.056 15:38:24 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:36.056 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:36.056 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:36.056 15:38:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:36.314 ************************************
00:11:36.314 START TEST bdev_write_zeroes
00:11:36.314 ************************************
00:11:36.314 15:38:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:36.314 [2024-12-06 15:38:24.846639] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:11:36.314 [2024-12-06 15:38:24.846807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75056 ]
00:11:36.314 [2024-12-06 15:38:24.998971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:36.571 [2024-12-06 15:38:25.050013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:37.135 Running I/O for 1 seconds...
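The write_zeroes pass starting here issues write-zeroes commands rather than data writes, which is why it is configured with -t 1 and needs no verify step. One way to pull the per-bdev numbers out of a result table like the one that follows (a hedged sketch; it assumes the console output was saved to a hypothetical bdevperf.log and that each row carries the Jenkins timestamp as its first field):

    # Each result row has the form
    #   <stamp> <bdev> : <runtime> <IOPS> <MiB/s> <Fail/s> <TO/s> <avg> <min> <max>
    # so the bdev name is field 2, IOPS field 5 and MiB/s field 6.
    awk '$3 == ":" && $2 ~ /^Nvme/ { printf "%-10s %10s IOPS %8s MiB/s\n", $2, $5, $6 }' bdevperf.log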
00:11:38.066 58688.00 IOPS, 229.25 MiB/s 00:11:38.066 Latency(us) 00:11:38.066 [2024-12-06T15:38:26.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.066 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme0n1 : 1.03 8310.04 32.46 0.00 0.00 15361.42 13166.78 32648.84 00:11:38.066 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme1n1p1 : 1.03 8299.29 32.42 0.00 0.00 15350.03 13405.09 32887.16 00:11:38.066 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme1n1p2 : 1.03 8288.45 32.38 0.00 0.00 15319.54 12809.31 30504.03 00:11:38.066 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme2n1 : 1.04 8278.90 32.34 0.00 0.00 15266.16 13226.36 27525.12 00:11:38.066 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme2n2 : 1.04 8269.48 32.30 0.00 0.00 15259.64 13047.62 27405.96 00:11:38.066 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme2n3 : 1.04 8259.68 32.26 0.00 0.00 15209.84 12988.04 27763.43 00:11:38.066 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:38.066 Nvme3n1 : 1.04 8249.52 32.22 0.00 0.00 15184.97 11319.85 29669.93 00:11:38.066 [2024-12-06T15:38:26.759Z] =================================================================================================================== 00:11:38.066 [2024-12-06T15:38:26.759Z] Total : 57955.35 226.39 0.00 0.00 15278.80 11319.85 32887.16 00:11:38.325 00:11:38.325 real 0m2.161s 00:11:38.325 user 0m1.749s 00:11:38.325 sys 0m0.295s 00:11:38.325 15:38:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.325 15:38:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:38.325 ************************************ 00:11:38.325 END TEST bdev_write_zeroes 00:11:38.325 ************************************ 00:11:38.325 15:38:26 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:38.325 15:38:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:38.325 15:38:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.325 15:38:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:38.325 ************************************ 00:11:38.325 START TEST bdev_json_nonenclosed 00:11:38.325 ************************************ 00:11:38.325 15:38:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:38.583 [2024-12-06 15:38:27.081860] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:11:38.583 [2024-12-06 15:38:27.082052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75099 ] 00:11:38.583 [2024-12-06 15:38:27.236994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.842 [2024-12-06 15:38:27.295345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.842 [2024-12-06 15:38:27.295501] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:38.842 [2024-12-06 15:38:27.295538] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:38.842 [2024-12-06 15:38:27.295558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:38.842 00:11:38.842 real 0m0.441s 00:11:38.842 user 0m0.209s 00:11:38.842 sys 0m0.128s 00:11:38.842 15:38:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.842 15:38:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:38.842 ************************************ 00:11:38.842 END TEST bdev_json_nonenclosed 00:11:38.842 ************************************ 00:11:38.842 15:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:38.842 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:38.842 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.842 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:38.842 ************************************ 00:11:38.842 START TEST bdev_json_nonarray 00:11:38.842 ************************************ 00:11:38.842 15:38:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:39.102 [2024-12-06 15:38:27.570601] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:39.102 [2024-12-06 15:38:27.570805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75124 ] 00:11:39.102 [2024-12-06 15:38:27.726691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.102 [2024-12-06 15:38:27.790394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.102 [2024-12-06 15:38:27.790567] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
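Both JSON negative tests behave as intended: bdevperf rejects a config whose top level is not a JSON object ("not enclosed in {}") and one whose 'subsystems' member is not an array. The exact contents of nonenclosed.json and nonarray.json are not shown in the log, so the bodies below are illustrative guesses that would trigger those two errors:

    # Top level is an array, not an object -> "not enclosed in {}."
    printf '[ "subsystems" ]\n' > nonenclosed.json

    # "subsystems" is present but is not an array
    # -> "'subsystems' should be an array."
    printf '{ "subsystems": "bdev" }\n' > nonarray.json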
00:11:39.102 [2024-12-06 15:38:27.790593] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:39.102 [2024-12-06 15:38:27.790611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:39.361 00:11:39.361 real 0m0.442s 00:11:39.361 user 0m0.217s 00:11:39.361 sys 0m0.121s 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.361 ************************************ 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:39.361 END TEST bdev_json_nonarray 00:11:39.361 ************************************ 00:11:39.361 15:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:39.361 15:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:39.361 15:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:39.361 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.361 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.361 15:38:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:39.361 ************************************ 00:11:39.361 START TEST bdev_gpt_uuid 00:11:39.361 ************************************ 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75150 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75150 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75150 ']' 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.361 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.362 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.362 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.362 15:38:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:39.620 [2024-12-06 15:38:28.098279] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
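The bdev_gpt_uuid test now starting asserts that the two GPT partitions on Nvme1n1 expose stable unique partition GUIDs. Condensed from the rpc/jq calls traced below, its core check is roughly this sketch (same RPC, filters and GUID as the log; error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Look the bdev up by the GUID itself; the lookup succeeds only if the
    # GPT partition was enumerated with that unique_partition_guid.
    bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)

    # The reply must contain exactly one bdev whose alias and GPT GUID both
    # equal the requested value.
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]

The same check is then repeated for the second partition's GUID, abf1734f-66e5-4c0f-aa29-4021d4d307df.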
00:11:39.620 [2024-12-06 15:38:28.098478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75150 ] 00:11:39.620 [2024-12-06 15:38:28.256232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.880 [2024-12-06 15:38:28.318120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.446 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.446 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:40.446 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:40.446 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.446 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:41.013 Some configs were skipped because the RPC state that can call them passed over. 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.013 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:41.013 { 00:11:41.013 "name": "Nvme1n1p1", 00:11:41.013 "aliases": [ 00:11:41.013 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:41.013 ], 00:11:41.013 "product_name": "GPT Disk", 00:11:41.013 "block_size": 4096, 00:11:41.013 "num_blocks": 655104, 00:11:41.013 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:41.013 "assigned_rate_limits": { 00:11:41.013 "rw_ios_per_sec": 0, 00:11:41.013 "rw_mbytes_per_sec": 0, 00:11:41.013 "r_mbytes_per_sec": 0, 00:11:41.013 "w_mbytes_per_sec": 0 00:11:41.013 }, 00:11:41.013 "claimed": false, 00:11:41.013 "zoned": false, 00:11:41.013 "supported_io_types": { 00:11:41.013 "read": true, 00:11:41.013 "write": true, 00:11:41.013 "unmap": true, 00:11:41.013 "flush": true, 00:11:41.014 "reset": true, 00:11:41.014 "nvme_admin": false, 00:11:41.014 "nvme_io": false, 00:11:41.014 "nvme_io_md": false, 00:11:41.014 "write_zeroes": true, 00:11:41.014 "zcopy": false, 00:11:41.014 "get_zone_info": false, 00:11:41.014 "zone_management": false, 00:11:41.014 "zone_append": false, 00:11:41.014 "compare": true, 00:11:41.014 "compare_and_write": false, 00:11:41.014 "abort": true, 00:11:41.014 "seek_hole": false, 00:11:41.014 "seek_data": false, 00:11:41.014 "copy": true, 00:11:41.014 "nvme_iov_md": false 00:11:41.014 }, 00:11:41.014 "driver_specific": { 
00:11:41.014 "gpt": { 00:11:41.014 "base_bdev": "Nvme1n1", 00:11:41.014 "offset_blocks": 256, 00:11:41.014 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:41.014 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:41.014 "partition_name": "SPDK_TEST_first" 00:11:41.014 } 00:11:41.014 } 00:11:41.014 } 00:11:41.014 ]' 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:41.014 { 00:11:41.014 "name": "Nvme1n1p2", 00:11:41.014 "aliases": [ 00:11:41.014 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:41.014 ], 00:11:41.014 "product_name": "GPT Disk", 00:11:41.014 "block_size": 4096, 00:11:41.014 "num_blocks": 655103, 00:11:41.014 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:41.014 "assigned_rate_limits": { 00:11:41.014 "rw_ios_per_sec": 0, 00:11:41.014 "rw_mbytes_per_sec": 0, 00:11:41.014 "r_mbytes_per_sec": 0, 00:11:41.014 "w_mbytes_per_sec": 0 00:11:41.014 }, 00:11:41.014 "claimed": false, 00:11:41.014 "zoned": false, 00:11:41.014 "supported_io_types": { 00:11:41.014 "read": true, 00:11:41.014 "write": true, 00:11:41.014 "unmap": true, 00:11:41.014 "flush": true, 00:11:41.014 "reset": true, 00:11:41.014 "nvme_admin": false, 00:11:41.014 "nvme_io": false, 00:11:41.014 "nvme_io_md": false, 00:11:41.014 "write_zeroes": true, 00:11:41.014 "zcopy": false, 00:11:41.014 "get_zone_info": false, 00:11:41.014 "zone_management": false, 00:11:41.014 "zone_append": false, 00:11:41.014 "compare": true, 00:11:41.014 "compare_and_write": false, 00:11:41.014 "abort": true, 00:11:41.014 "seek_hole": false, 00:11:41.014 "seek_data": false, 00:11:41.014 "copy": true, 00:11:41.014 "nvme_iov_md": false 00:11:41.014 }, 00:11:41.014 "driver_specific": { 00:11:41.014 "gpt": { 00:11:41.014 "base_bdev": "Nvme1n1", 00:11:41.014 "offset_blocks": 655360, 00:11:41.014 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:41.014 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:41.014 "partition_name": "SPDK_TEST_second" 00:11:41.014 } 00:11:41.014 } 00:11:41.014 } 00:11:41.014 ]' 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:41.014 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75150 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75150 ']' 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75150 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75150 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.273 killing process with pid 75150 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75150' 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75150 00:11:41.273 15:38:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75150 00:11:41.840 00:11:41.840 real 0m2.330s 00:11:41.840 user 0m2.610s 00:11:41.840 sys 0m0.549s 00:11:41.840 15:38:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.840 15:38:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:41.840 ************************************ 00:11:41.840 END TEST bdev_gpt_uuid 00:11:41.840 ************************************ 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:41.840 15:38:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:42.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:42.357 Waiting for block devices as requested 00:11:42.357 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.615 0000:00:10.0 (1b36 0010): 
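The bdev_gpt_uuid test that just finished looks a GPT partition bdev up by its unique partition GUID and asserts that the GUID is reported both as the bdev's alias and in its driver_specific.gpt metadata. A minimal standalone sketch of that check, assuming a running SPDK target, the repo paths from this run, and the SPDK_TEST_second GUID shown above:

  # Sketch of the UUID cross-check driven by bdev/blockdev.sh above.
  # Paths and the GUID are taken from this log; an SPDK app must be running.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  guid=abf1734f-66e5-4c0f-aa29-4021d4d307df
  bdev=$("$rpc" bdev_get_bdevs -b "$guid")
  # Exactly one bdev must answer to the partition GUID...
  [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
  # ...and the GUID must double as the alias and the GPT metadata entry.
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]

Against the partitions created by this run, the same three assertions hold for the SPDK_TEST_first GUID as well.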
uio_pci_generic -> nvme 00:11:42.615 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.615 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.906 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:47.906 15:38:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:47.906 15:38:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:47.906 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:47.906 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:47.906 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:47.906 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:47.906 15:38:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:47.906 00:11:47.906 real 0m55.815s 00:11:47.906 user 1m11.140s 00:11:47.906 sys 0m10.896s 00:11:47.906 15:38:36 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.164 ************************************ 00:11:48.164 END TEST blockdev_nvme_gpt 00:11:48.164 15:38:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:48.164 ************************************ 00:11:48.164 15:38:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:48.164 15:38:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.164 15:38:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.164 15:38:36 -- common/autotest_common.sh@10 -- # set +x 00:11:48.164 ************************************ 00:11:48.164 START TEST nvme 00:11:48.164 ************************************ 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:48.164 * Looking for test storage... 00:11:48.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.164 15:38:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.164 15:38:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.164 15:38:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.164 15:38:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.164 15:38:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.164 15:38:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:48.164 15:38:36 nvme -- scripts/common.sh@345 -- # : 1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.164 15:38:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.164 15:38:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@353 -- # local d=1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.164 15:38:36 nvme -- scripts/common.sh@355 -- # echo 1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.164 15:38:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@353 -- # local d=2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.164 15:38:36 nvme -- scripts/common.sh@355 -- # echo 2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.164 15:38:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.164 15:38:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.164 15:38:36 nvme -- scripts/common.sh@368 -- # return 0 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.164 --rc genhtml_branch_coverage=1 00:11:48.164 --rc genhtml_function_coverage=1 00:11:48.164 --rc genhtml_legend=1 00:11:48.164 --rc geninfo_all_blocks=1 00:11:48.164 --rc geninfo_unexecuted_blocks=1 00:11:48.164 00:11:48.164 ' 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.164 --rc genhtml_branch_coverage=1 00:11:48.164 --rc genhtml_function_coverage=1 00:11:48.164 --rc genhtml_legend=1 00:11:48.164 --rc geninfo_all_blocks=1 00:11:48.164 --rc geninfo_unexecuted_blocks=1 00:11:48.164 00:11:48.164 ' 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.164 --rc genhtml_branch_coverage=1 00:11:48.164 --rc genhtml_function_coverage=1 00:11:48.164 --rc genhtml_legend=1 00:11:48.164 --rc geninfo_all_blocks=1 00:11:48.164 --rc geninfo_unexecuted_blocks=1 00:11:48.164 00:11:48.164 ' 00:11:48.164 15:38:36 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.164 --rc genhtml_branch_coverage=1 00:11:48.164 --rc genhtml_function_coverage=1 00:11:48.164 --rc genhtml_legend=1 00:11:48.164 --rc geninfo_all_blocks=1 00:11:48.164 --rc geninfo_unexecuted_blocks=1 00:11:48.164 00:11:48.164 ' 00:11:48.164 15:38:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:49.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.297 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.556 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.556 15:38:38 nvme -- nvme/nvme.sh@79 -- # uname 00:11:49.556 15:38:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:49.556 15:38:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:49.556 15:38:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:49.556 15:38:38 nvme -- 
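The xtrace above walks scripts/common.sh comparing the installed lcov version against 2 (the lt 1.15 2 call): each version string is split on dots, dashes and colons via IFS, then compared component-wise until one side differs. The same idea condensed into a self-contained helper (the function name is illustrative, not the script's):

  # Component-wise "less than" for dotted versions, mirroring the trace above.
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # missing components count as 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the lt 1.15 2 result above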
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1075 -- # stubpid=75781 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:49.556 Waiting for stub to ready for secondary processes... 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/75781 ]] 00:11:49.556 15:38:38 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:49.556 [2024-12-06 15:38:38.178875] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:49.556 [2024-12-06 15:38:38.179071] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:50.492 15:38:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:50.492 15:38:39 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/75781 ]] 00:11:50.492 15:38:39 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:51.059 [2024-12-06 15:38:39.610875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.059 [2024-12-06 15:38:39.658767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.059 [2024-12-06 15:38:39.658804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.059 [2024-12-06 15:38:39.658818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.059 [2024-12-06 15:38:39.678523] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:51.059 [2024-12-06 15:38:39.678618] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:51.059 [2024-12-06 15:38:39.691532] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:51.059 [2024-12-06 15:38:39.691837] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:51.059 [2024-12-06 15:38:39.692860] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:51.059 [2024-12-06 15:38:39.693206] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:51.059 [2024-12-06 15:38:39.693504] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:51.059 [2024-12-06 15:38:39.694379] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:51.059 [2024-12-06 15:38:39.694865] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:51.059 [2024-12-06 15:38:39.695017] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:51.060 [2024-12-06 15:38:39.697381] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:51.060 [2024-12-06 15:38:39.698079] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:51.060 [2024-12-06 15:38:39.698281] nvme_cuse.c: 
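The repeated '[' -e /var/run/spdk_stub0 ']' / sleep 1s trace above is autotest_common.sh waiting for the stub app to come up: it polls for the stub's readiness file once per second for as long as the stub PID stays alive. A stripped-down sketch of that wait, reusing the PID reported by this run:

  # Block until the stub signals readiness, bailing out if it dies first.
  stubpid=75781   # PID from this run
  while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stubpid ]] || { echo 'stub exited before becoming ready' >&2; exit 1; }
      sleep 1s
  done
  echo done.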
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:51.060 [2024-12-06 15:38:39.698403] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:51.060 [2024-12-06 15:38:39.698530] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:51.628 15:38:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:51.628 done. 00:11:51.628 15:38:40 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:51.628 15:38:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:51.628 15:38:40 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:51.628 15:38:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.628 15:38:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:51.628 ************************************ 00:11:51.628 START TEST nvme_reset 00:11:51.628 ************************************ 00:11:51.628 15:38:40 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:51.887 Initializing NVMe Controllers 00:11:51.887 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:51.887 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:51.887 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:51.887 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:51.887 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:51.887 00:11:51.887 real 0m0.283s 00:11:51.887 user 0m0.103s 00:11:51.887 sys 0m0.135s 00:11:51.887 15:38:40 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.887 15:38:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:51.887 ************************************ 00:11:51.887 END TEST nvme_reset 00:11:51.887 ************************************ 00:11:51.888 15:38:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:51.888 15:38:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.888 15:38:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.888 15:38:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:51.888 ************************************ 00:11:51.888 START TEST nvme_identify 00:11:51.888 ************************************ 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:51.888 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:51.888 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:51.888 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:51.888 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:51.888 15:38:40 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:51.888 15:38:40 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:52.149 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:52.149 [2024-12-06 15:38:40.826581] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 75814 terminated unexpected 00:11:52.149 ===================================================== 00:11:52.149 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:52.149 ===================================================== 00:11:52.149 Controller Capabilities/Features 00:11:52.149 ================================ 00:11:52.149 Vendor ID: 1b36 00:11:52.149 Subsystem Vendor ID: 1af4 00:11:52.149 Serial Number: 12343 00:11:52.149 Model Number: QEMU NVMe Ctrl 00:11:52.149 Firmware Version: 8.0.0 00:11:52.149 Recommended Arb Burst: 6 00:11:52.149 IEEE OUI Identifier: 00 54 52 00:11:52.149 Multi-path I/O 00:11:52.149 May have multiple subsystem ports: No 00:11:52.149 May have multiple controllers: Yes 00:11:52.149 Associated with SR-IOV VF: No 00:11:52.149 Max Data Transfer Size: 524288 00:11:52.149 Max Number of Namespaces: 256 00:11:52.149 Max Number of I/O Queues: 64 00:11:52.149 NVMe Specification Version (VS): 1.4 00:11:52.149 NVMe Specification Version (Identify): 1.4 00:11:52.149 Maximum Queue Entries: 2048 00:11:52.150 Contiguous Queues Required: Yes 00:11:52.150 Arbitration Mechanisms Supported 00:11:52.150 Weighted Round Robin: Not Supported 00:11:52.150 Vendor Specific: Not Supported 00:11:52.150 Reset Timeout: 7500 ms 00:11:52.150 Doorbell Stride: 4 bytes 00:11:52.150 NVM Subsystem Reset: Not Supported 00:11:52.150 Command Sets Supported 00:11:52.150 NVM Command Set: Supported 00:11:52.150 Boot Partition: Not Supported 00:11:52.150 Memory Page Size Minimum: 4096 bytes 00:11:52.150 Memory Page Size Maximum: 65536 bytes 00:11:52.150 Persistent Memory Region: Not Supported 00:11:52.150 Optional Asynchronous Events Supported 00:11:52.150 Namespace Attribute Notices: Supported 00:11:52.150 Firmware Activation Notices: Not Supported 00:11:52.150 ANA Change Notices: Not Supported 00:11:52.150 PLE Aggregate Log Change Notices: Not Supported 00:11:52.150 LBA Status Info Alert Notices: Not Supported 00:11:52.150 EGE Aggregate Log Change Notices: Not Supported 00:11:52.150 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.150 Zone Descriptor Change Notices: Not Supported 00:11:52.150 Discovery Log Change Notices: Not Supported 00:11:52.150 Controller Attributes 00:11:52.150 128-bit Host Identifier: Not Supported 00:11:52.150 Non-Operational Permissive Mode: Not Supported 00:11:52.150 NVM Sets: Not Supported 00:11:52.150 Read Recovery Levels: Not Supported 00:11:52.150 Endurance Groups: Supported 00:11:52.150 Predictable Latency Mode: Not Supported 00:11:52.150 Traffic Based Keep ALive: Not Supported 00:11:52.150 Namespace Granularity: Not Supported 00:11:52.150 SQ Associations: Not Supported 00:11:52.150 UUID List: Not Supported 00:11:52.150 Multi-Domain Subsystem: Not Supported 00:11:52.150 Fixed Capacity Management: Not Supported 00:11:52.150 Variable Capacity Management: Not Supported 00:11:52.150 Delete Endurance Group: Not Supported 00:11:52.150 Delete NVM Set: Not Supported 00:11:52.150 Extended LBA Formats Supported: Supported 00:11:52.150 Flexible Data Placement Supported: Supported 00:11:52.150 00:11:52.150 Controller Memory Buffer Support 00:11:52.150 ================================ 00:11:52.150 Supported: No 00:11:52.150 
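The bdfs list feeding these identify dumps was assembled by get_nvme_bdfs above: scripts/gen_nvme.sh emits an SPDK JSON config and jq pulls every controller's PCI address out of it. The same discovery as a standalone snippet (repo path as in this run):

  # BDF discovery as used by nvme_identify above: gen_nvme.sh emits a JSON
  # config and jq extracts each controller's traddr from it.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 through 0000:00:13.0 in this run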
00:11:52.150 Persistent Memory Region Support 00:11:52.150 ================================ 00:11:52.150 Supported: No 00:11:52.150 00:11:52.150 Admin Command Set Attributes 00:11:52.150 ============================ 00:11:52.150 Security Send/Receive: Not Supported 00:11:52.150 Format NVM: Supported 00:11:52.150 Firmware Activate/Download: Not Supported 00:11:52.150 Namespace Management: Supported 00:11:52.150 Device Self-Test: Not Supported 00:11:52.150 Directives: Supported 00:11:52.150 NVMe-MI: Not Supported 00:11:52.150 Virtualization Management: Not Supported 00:11:52.150 Doorbell Buffer Config: Supported 00:11:52.150 Get LBA Status Capability: Not Supported 00:11:52.150 Command & Feature Lockdown Capability: Not Supported 00:11:52.150 Abort Command Limit: 4 00:11:52.150 Async Event Request Limit: 4 00:11:52.150 Number of Firmware Slots: N/A 00:11:52.150 Firmware Slot 1 Read-Only: N/A 00:11:52.150 Firmware Activation Without Reset: N/A 00:11:52.150 Multiple Update Detection Support: N/A 00:11:52.150 Firmware Update Granularity: No Information Provided 00:11:52.150 Per-Namespace SMART Log: Yes 00:11:52.150 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.150 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:52.150 Command Effects Log Page: Supported 00:11:52.150 Get Log Page Extended Data: Supported 00:11:52.150 Telemetry Log Pages: Not Supported 00:11:52.150 Persistent Event Log Pages: Not Supported 00:11:52.150 Supported Log Pages Log Page: May Support 00:11:52.150 Commands Supported & Effects Log Page: Not Supported 00:11:52.150 Feature Identifiers & Effects Log Page:May Support 00:11:52.150 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.150 Data Area 4 for Telemetry Log: Not Supported 00:11:52.150 Error Log Page Entries Supported: 1 00:11:52.150 Keep Alive: Not Supported 00:11:52.150 00:11:52.150 NVM Command Set Attributes 00:11:52.150 ========================== 00:11:52.150 Submission Queue Entry Size 00:11:52.150 Max: 64 00:11:52.150 Min: 64 00:11:52.150 Completion Queue Entry Size 00:11:52.150 Max: 16 00:11:52.150 Min: 16 00:11:52.150 Number of Namespaces: 256 00:11:52.150 Compare Command: Supported 00:11:52.150 Write Uncorrectable Command: Not Supported 00:11:52.150 Dataset Management Command: Supported 00:11:52.150 Write Zeroes Command: Supported 00:11:52.150 Set Features Save Field: Supported 00:11:52.150 Reservations: Not Supported 00:11:52.150 Timestamp: Supported 00:11:52.150 Copy: Supported 00:11:52.150 Volatile Write Cache: Present 00:11:52.150 Atomic Write Unit (Normal): 1 00:11:52.150 Atomic Write Unit (PFail): 1 00:11:52.150 Atomic Compare & Write Unit: 1 00:11:52.150 Fused Compare & Write: Not Supported 00:11:52.150 Scatter-Gather List 00:11:52.150 SGL Command Set: Supported 00:11:52.150 SGL Keyed: Not Supported 00:11:52.150 SGL Bit Bucket Descriptor: Not Supported 00:11:52.150 SGL Metadata Pointer: Not Supported 00:11:52.150 Oversized SGL: Not Supported 00:11:52.150 SGL Metadata Address: Not Supported 00:11:52.150 SGL Offset: Not Supported 00:11:52.150 Transport SGL Data Block: Not Supported 00:11:52.150 Replay Protected Memory Block: Not Supported 00:11:52.150 00:11:52.150 Firmware Slot Information 00:11:52.150 ========================= 00:11:52.150 Active slot: 1 00:11:52.150 Slot 1 Firmware Revision: 1.0 00:11:52.150 00:11:52.150 00:11:52.150 Commands Supported and Effects 00:11:52.150 ============================== 00:11:52.150 Admin Commands 00:11:52.150 -------------- 00:11:52.150 Delete I/O Submission Queue (00h): Supported 
00:11:52.150 Create I/O Submission Queue (01h): Supported 00:11:52.150 Get Log Page (02h): Supported 00:11:52.150 Delete I/O Completion Queue (04h): Supported 00:11:52.150 Create I/O Completion Queue (05h): Supported 00:11:52.150 Identify (06h): Supported 00:11:52.150 Abort (08h): Supported 00:11:52.150 Set Features (09h): Supported 00:11:52.150 Get Features (0Ah): Supported 00:11:52.150 Asynchronous Event Request (0Ch): Supported 00:11:52.150 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.150 Directive Send (19h): Supported 00:11:52.150 Directive Receive (1Ah): Supported 00:11:52.150 Virtualization Management (1Ch): Supported 00:11:52.150 Doorbell Buffer Config (7Ch): Supported 00:11:52.150 Format NVM (80h): Supported LBA-Change 00:11:52.150 I/O Commands 00:11:52.150 ------------ 00:11:52.150 Flush (00h): Supported LBA-Change 00:11:52.150 Write (01h): Supported LBA-Change 00:11:52.150 Read (02h): Supported 00:11:52.150 Compare (05h): Supported 00:11:52.150 Write Zeroes (08h): Supported LBA-Change 00:11:52.150 Dataset Management (09h): Supported LBA-Change 00:11:52.150 Unknown (0Ch): Supported 00:11:52.150 Unknown (12h): Supported 00:11:52.150 Copy (19h): Supported LBA-Change 00:11:52.150 Unknown (1Dh): Supported LBA-Change 00:11:52.150 00:11:52.150 Error Log 00:11:52.150 ========= 00:11:52.150 00:11:52.150 Arbitration 00:11:52.150 =========== 00:11:52.150 Arbitration Burst: no limit 00:11:52.150 00:11:52.150 Power Management 00:11:52.150 ================ 00:11:52.150 Number of Power States: 1 00:11:52.150 Current Power State: Power State #0 00:11:52.150 Power State #0: 00:11:52.150 Max Power: 25.00 W 00:11:52.150 Non-Operational State: Operational 00:11:52.150 Entry Latency: 16 microseconds 00:11:52.150 Exit Latency: 4 microseconds 00:11:52.150 Relative Read Throughput: 0 00:11:52.150 Relative Read Latency: 0 00:11:52.150 Relative Write Throughput: 0 00:11:52.150 Relative Write Latency: 0 00:11:52.150 Idle Power: Not Reported 00:11:52.150 Active Power: Not Reported 00:11:52.150 Non-Operational Permissive Mode: Not Supported 00:11:52.150 00:11:52.150 Health Information 00:11:52.150 ================== 00:11:52.150 Critical Warnings: 00:11:52.150 Available Spare Space: OK 00:11:52.150 Temperature: OK 00:11:52.150 Device Reliability: OK 00:11:52.150 Read Only: No 00:11:52.150 Volatile Memory Backup: OK 00:11:52.150 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.150 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.150 Available Spare: 0% 00:11:52.150 Available Spare Threshold: 0% 00:11:52.150 Life Percentage Used: 0% 00:11:52.150 Data Units Read: 815 00:11:52.150 Data Units Written: 744 00:11:52.150 Host Read Commands: 34140 00:11:52.150 Host Write Commands: 33564 00:11:52.150 Controller Busy Time: 0 minutes 00:11:52.150 Power Cycles: 0 00:11:52.150 Power On Hours: 0 hours 00:11:52.150 Unsafe Shutdowns: 0 00:11:52.150 Unrecoverable Media Errors: 0 00:11:52.150 Lifetime Error Log Entries: 0 00:11:52.150 Warning Temperature Time: 0 minutes 00:11:52.150 Critical Temperature Time: 0 minutes 00:11:52.150 00:11:52.150 Number of Queues 00:11:52.150 ================ 00:11:52.150 Number of I/O Submission Queues: 64 00:11:52.150 Number of I/O Completion Queues: 64 00:11:52.150 00:11:52.150 ZNS Specific Controller Data 00:11:52.150 ============================ 00:11:52.150 Zone Append Size Limit: 0 00:11:52.150 00:11:52.150 00:11:52.150 Active Namespaces 00:11:52.150 ================= 00:11:52.150 Namespace ID:1 00:11:52.150 Error Recovery Timeout: Unlimited 00:11:52.150 
Command Set Identifier: NVM (00h) 00:11:52.150 Deallocate: Supported 00:11:52.150 Deallocated/Unwritten Error: Supported 00:11:52.150 Deallocated Read Value: All 0x00 00:11:52.150 Deallocate in Write Zeroes: Not Supported 00:11:52.150 Deallocated Guard Field: 0xFFFF 00:11:52.150 Flush: Supported 00:11:52.150 Reservation: Not Supported 00:11:52.150 Namespace Sharing Capabilities: Multiple Controllers 00:11:52.150 Size (in LBAs): 262144 (1GiB) 00:11:52.150 Capacity (in LBAs): 262144 (1GiB) 00:11:52.150 Utilization (in LBAs): 262144 (1GiB) 00:11:52.151 Thin Provisioning: Not Supported 00:11:52.151 Per-NS Atomic Units: No 00:11:52.151 Maximum Single Source Range Length: 128 00:11:52.151 Maximum Copy Length: 128 00:11:52.151 Maximum Source Range Count: 128 00:11:52.151 NGUID/EUI64 Never Reused: No 00:11:52.151 Namespace Write Protected: No 00:11:52.151 Endurance group ID: 1 00:11:52.151 Number of LBA Formats: 8 00:11:52.151 Current LBA Format: LBA Format #04 00:11:52.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.151 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.151 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.151 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.151 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.151 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.151 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.151 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.151 00:11:52.151 Get Feature FDP: 00:11:52.151 ================ 00:11:52.151 Enabled: Yes 00:11:52.151 FDP configuration index: 0 00:11:52.151 00:11:52.151 FDP configurations log page 00:11:52.151 =========================== 00:11:52.151 Number of FDP configurations: 1 00:11:52.151 Version: 0 00:11:52.151 Size: 112 00:11:52.151 FDP Configuration Descriptor: 0 00:11:52.151 Descriptor Size: 96 00:11:52.151 Reclaim Group Identifier format: 2 00:11:52.151 FDP Volatile Write Cache: Not Present 00:11:52.151 FDP Configuration: Valid 00:11:52.151 Vendor Specific Size: 0 00:11:52.151 Number of Reclaim Groups: 2 00:11:52.151 Number of Reclaim Unit Handles: 8 00:11:52.151 Max Placement Identifiers: 128 00:11:52.151 Number of Namespaces Supported: 256 00:11:52.151 Reclaim Unit Nominal Size: 6000000 bytes 00:11:52.151 Estimated Reclaim Unit Time Limit: Not Reported 00:11:52.151 RUH Desc #000: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #001: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #002: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #003: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #004: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #005: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #006: RUH Type: Initially Isolated 00:11:52.151 RUH Desc #007: RUH Type: Initially Isolated 00:11:52.151 00:11:52.151 FDP reclaim unit handle usage log page 00:11:52.151 ====================================== 00:11:52.151 Number of Reclaim Unit Handles: 8 00:11:52.151 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:52.151 RUH Usage Desc #001: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #002: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #003: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #004: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #005: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #006: RUH Attributes: Unused 00:11:52.151 RUH Usage Desc #007: RUH Attributes: Unused 00:11:52.151 00:11:52.151 FDP statistics log page 00:11:52.151 ======================= 00:11:52.151 Host bytes with metadata
written: 468455424 00:11:52.151 Media bytes with metadata written: 468619264 00:11:52.151 Media bytes erased: 0 00:11:52.151 00:11:52.151 FDP events log page 00:11:52.151 =================== 00:11:52.151 Number of FDP events: 0 00:11:52.151 00:11:52.151 NVM Specific Namespace Data 00:11:52.151 =========================== 00:11:52.151 Logical Block Storage Tag Mask: 0 00:11:52.151 Protection Information Capabilities: 00:11:52.151 16b Guard Protection Information Storage Tag Support: No 00:11:52.151 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.151 Storage Tag Check Read Support: No 00:11:52.151 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.151 ===================================================== 00:11:52.151 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:52.151 ===================================================== 00:11:52.151 Controller Capabilities/Features 00:11:52.151 ================================ 00:11:52.151 Vendor ID: 1b36 00:11:52.151 Subsystem Vendor ID: 1af4 00:11:52.151 Serial Number: 12340 00:11:52.151 Model Number: QEMU NVMe Ctrl 00:11:52.151 Firmware Version: 8.0.0 00:11:52.151 Recommended Arb Burst: 6 00:11:52.151 IEEE OUI Identifier: 00 54 52 00:11:52.151 Multi-path I/O 00:11:52.151 May have multiple subsystem ports: No 00:11:52.151 May have multiple controllers: No 00:11:52.151 Associated with SR-IOV VF: No 00:11:52.151 Max Data Transfer Size: 524288 00:11:52.151 Max Number of Namespaces: 256 00:11:52.151 Max Number of I/O Queues: 64 00:11:52.151 NVMe Specification Version (VS): 1.4 00:11:52.151 NVMe Specification Version (Identify): 1.4 00:11:52.151 Maximum Queue Entries: 2048 00:11:52.151 Contiguous Queues Required: Yes 00:11:52.151 Arbitration Mechanisms Supported 00:11:52.151 Weighted Round Robin: Not Supported 00:11:52.151 Vendor Specific: Not Supported 00:11:52.151 Reset Timeout: 7500 ms 00:11:52.151 Doorbell Stride: 4 bytes 00:11:52.151 NVM Subsystem Reset: Not Supported 00:11:52.151 Command Sets Supported 00:11:52.151 NVM Command Set: Supported 00:11:52.151 Boot Partition: Not Supported 00:11:52.151 Memory Page Size Minimum: 4096 bytes 00:11:52.151 Memory Page Size Maximum: 65536 bytes 00:11:52.151 Persistent Memory Region: Not Supported 00:11:52.151 Optional Asynchronous Events Supported 00:11:52.151 Namespace Attribute Notices: Supported 00:11:52.151 Firmware Activation Notices: Not Supported 00:11:52.151 ANA Change Notices: Not Supported 00:11:52.151 PLE Aggregate Log Change Notices: Not Supported 00:11:52.151 LBA Status Info Alert Notices: Not Supported 00:11:52.151 EGE Aggregate Log Change Notices: Not Supported 00:11:52.151 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.151 Zone Descriptor Change Notices: Not Supported 00:11:52.151 
Discovery Log Change Notices: Not Supported 00:11:52.151 Controller Attributes 00:11:52.151 128-bit Host Identifier: Not Supported 00:11:52.151 Non-Operational Permissive Mode: Not Supported 00:11:52.151 NVM Sets: Not Supported 00:11:52.151 Read Recovery Levels: Not Supported 00:11:52.151 Endurance Groups: [2024-12-06 15:38:40.829536] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 75814 terminated unexpected 00:11:52.151 Not Supported 00:11:52.151 Predictable Latency Mode: Not Supported 00:11:52.151 Traffic Based Keep ALive: Not Supported 00:11:52.151 Namespace Granularity: Not Supported 00:11:52.151 SQ Associations: Not Supported 00:11:52.151 UUID List: Not Supported 00:11:52.151 Multi-Domain Subsystem: Not Supported 00:11:52.151 Fixed Capacity Management: Not Supported 00:11:52.151 Variable Capacity Management: Not Supported 00:11:52.151 Delete Endurance Group: Not Supported 00:11:52.151 Delete NVM Set: Not Supported 00:11:52.151 Extended LBA Formats Supported: Supported 00:11:52.151 Flexible Data Placement Supported: Not Supported 00:11:52.151 00:11:52.151 Controller Memory Buffer Support 00:11:52.151 ================================ 00:11:52.151 Supported: No 00:11:52.151 00:11:52.151 Persistent Memory Region Support 00:11:52.151 ================================ 00:11:52.151 Supported: No 00:11:52.151 00:11:52.151 Admin Command Set Attributes 00:11:52.151 ============================ 00:11:52.151 Security Send/Receive: Not Supported 00:11:52.151 Format NVM: Supported 00:11:52.151 Firmware Activate/Download: Not Supported 00:11:52.151 Namespace Management: Supported 00:11:52.151 Device Self-Test: Not Supported 00:11:52.151 Directives: Supported 00:11:52.151 NVMe-MI: Not Supported 00:11:52.151 Virtualization Management: Not Supported 00:11:52.151 Doorbell Buffer Config: Supported 00:11:52.151 Get LBA Status Capability: Not Supported 00:11:52.151 Command & Feature Lockdown Capability: Not Supported 00:11:52.151 Abort Command Limit: 4 00:11:52.151 Async Event Request Limit: 4 00:11:52.151 Number of Firmware Slots: N/A 00:11:52.151 Firmware Slot 1 Read-Only: N/A 00:11:52.151 Firmware Activation Without Reset: N/A 00:11:52.151 Multiple Update Detection Support: N/A 00:11:52.151 Firmware Update Granularity: No Information Provided 00:11:52.151 Per-Namespace SMART Log: Yes 00:11:52.151 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.151 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:52.151 Command Effects Log Page: Supported 00:11:52.151 Get Log Page Extended Data: Supported 00:11:52.151 Telemetry Log Pages: Not Supported 00:11:52.151 Persistent Event Log Pages: Not Supported 00:11:52.151 Supported Log Pages Log Page: May Support 00:11:52.151 Commands Supported & Effects Log Page: Not Supported 00:11:52.151 Feature Identifiers & Effects Log Page:May Support 00:11:52.151 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.151 Data Area 4 for Telemetry Log: Not Supported 00:11:52.151 Error Log Page Entries Supported: 1 00:11:52.151 Keep Alive: Not Supported 00:11:52.151 00:11:52.151 NVM Command Set Attributes 00:11:52.151 ========================== 00:11:52.151 Submission Queue Entry Size 00:11:52.151 Max: 64 00:11:52.151 Min: 64 00:11:52.151 Completion Queue Entry Size 00:11:52.151 Max: 16 00:11:52.151 Min: 16 00:11:52.151 Number of Namespaces: 256 00:11:52.151 Compare Command: Supported 00:11:52.151 Write Uncorrectable Command: Not Supported 00:11:52.151 Dataset Management Command: Supported 00:11:52.151 Write Zeroes Command: Supported 
00:11:52.151 Set Features Save Field: Supported 00:11:52.151 Reservations: Not Supported 00:11:52.151 Timestamp: Supported 00:11:52.151 Copy: Supported 00:11:52.151 Volatile Write Cache: Present 00:11:52.151 Atomic Write Unit (Normal): 1 00:11:52.151 Atomic Write Unit (PFail): 1 00:11:52.151 Atomic Compare & Write Unit: 1 00:11:52.151 Fused Compare & Write: Not Supported 00:11:52.151 Scatter-Gather List 00:11:52.151 SGL Command Set: Supported 00:11:52.151 SGL Keyed: Not Supported 00:11:52.151 SGL Bit Bucket Descriptor: Not Supported 00:11:52.152 SGL Metadata Pointer: Not Supported 00:11:52.152 Oversized SGL: Not Supported 00:11:52.152 SGL Metadata Address: Not Supported 00:11:52.152 SGL Offset: Not Supported 00:11:52.152 Transport SGL Data Block: Not Supported 00:11:52.152 Replay Protected Memory Block: Not Supported 00:11:52.152 00:11:52.152 Firmware Slot Information 00:11:52.152 ========================= 00:11:52.152 Active slot: 1 00:11:52.152 Slot 1 Firmware Revision: 1.0 00:11:52.152 00:11:52.152 00:11:52.152 Commands Supported and Effects 00:11:52.152 ============================== 00:11:52.152 Admin Commands 00:11:52.152 -------------- 00:11:52.152 Delete I/O Submission Queue (00h): Supported 00:11:52.152 Create I/O Submission Queue (01h): Supported 00:11:52.152 Get Log Page (02h): Supported 00:11:52.152 Delete I/O Completion Queue (04h): Supported 00:11:52.152 Create I/O Completion Queue (05h): Supported 00:11:52.152 Identify (06h): Supported 00:11:52.152 Abort (08h): Supported 00:11:52.152 Set Features (09h): Supported 00:11:52.152 Get Features (0Ah): Supported 00:11:52.152 Asynchronous Event Request (0Ch): Supported 00:11:52.152 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.152 Directive Send (19h): Supported 00:11:52.152 Directive Receive (1Ah): Supported 00:11:52.152 Virtualization Management (1Ch): Supported 00:11:52.152 Doorbell Buffer Config (7Ch): Supported 00:11:52.152 Format NVM (80h): Supported LBA-Change 00:11:52.152 I/O Commands 00:11:52.152 ------------ 00:11:52.152 Flush (00h): Supported LBA-Change 00:11:52.152 Write (01h): Supported LBA-Change 00:11:52.152 Read (02h): Supported 00:11:52.152 Compare (05h): Supported 00:11:52.152 Write Zeroes (08h): Supported LBA-Change 00:11:52.152 Dataset Management (09h): Supported LBA-Change 00:11:52.152 Unknown (0Ch): Supported 00:11:52.152 Unknown (12h): Supported 00:11:52.152 Copy (19h): Supported LBA-Change 00:11:52.152 Unknown (1Dh): Supported LBA-Change 00:11:52.152 00:11:52.152 Error Log 00:11:52.152 ========= 00:11:52.152 00:11:52.152 Arbitration 00:11:52.152 =========== 00:11:52.152 Arbitration Burst: no limit 00:11:52.152 00:11:52.152 Power Management 00:11:52.152 ================ 00:11:52.152 Number of Power States: 1 00:11:52.152 Current Power State: Power State #0 00:11:52.152 Power State #0: 00:11:52.152 Max Power: 25.00 W 00:11:52.152 Non-Operational State: Operational 00:11:52.152 Entry Latency: 16 microseconds 00:11:52.152 Exit Latency: 4 microseconds 00:11:52.152 Relative Read Throughput: 0 00:11:52.152 Relative Read Latency: 0 00:11:52.152 Relative Write Throughput: 0 00:11:52.152 Relative Write Latency: 0 00:11:52.152 Idle Power: Not Reported 00:11:52.152 Active Power: Not Reported 00:11:52.152 Non-Operational Permissive Mode: Not Supported 00:11:52.152 00:11:52.152 Health Information 00:11:52.152 ================== 00:11:52.152 Critical Warnings: 00:11:52.152 Available Spare Space: OK 00:11:52.152 Temperature: OK 00:11:52.152 Device Reliability: OK 00:11:52.152 Read Only: No 00:11:52.152 
Volatile Memory Backup: OK 00:11:52.152 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.152 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.152 Available Spare: 0% 00:11:52.152 Available Spare Threshold: 0% 00:11:52.152 Life Percentage Used: 0% 00:11:52.152 Data Units Read: 684 00:11:52.152 Data Units Written: 612 00:11:52.152 Host Read Commands: 32661 00:11:52.152 Host Write Commands: 32447 00:11:52.152 Controller Busy Time: 0 minutes 00:11:52.152 Power Cycles: 0 00:11:52.152 Power On Hours: 0 hours 00:11:52.152 Unsafe Shutdowns: 0 00:11:52.152 Unrecoverable Media Errors: 0 00:11:52.152 Lifetime Error Log Entries: 0 00:11:52.152 Warning Temperature Time: 0 minutes 00:11:52.152 Critical Temperature Time: 0 minutes 00:11:52.152 00:11:52.152 Number of Queues 00:11:52.152 ================ 00:11:52.152 Number of I/O Submission Queues: 64 00:11:52.152 Number of I/O Completion Queues: 64 00:11:52.152 00:11:52.152 ZNS Specific Controller Data 00:11:52.152 ============================ 00:11:52.152 Zone Append Size Limit: 0 00:11:52.152 00:11:52.152 00:11:52.152 Active Namespaces 00:11:52.152 ================= 00:11:52.152 Namespace ID:1 00:11:52.152 Error Recovery Timeout: Unlimited 00:11:52.152 Command Set Identifier: NVM (00h) 00:11:52.152 Deallocate: Supported 00:11:52.152 Deallocated/Unwritten Error: Supported 00:11:52.152 Deallocated Read Value: All 0x00 00:11:52.152 Deallocate in Write Zeroes: Not Supported 00:11:52.152 Deallocated Guard Field: 0xFFFF 00:11:52.152 Flush: Supported 00:11:52.152 Reservation: Not Supported 00:11:52.152 Metadata Transferred as: Separate Metadata Buffer 00:11:52.152 Namespace Sharing Capabilities: Private 00:11:52.152 Size (in LBAs): 1548666 (5GiB) 00:11:52.152 Capacity (in LBAs): 1548666 (5GiB) 00:11:52.152 Utilization (in LBAs): 1548666 (5GiB) 00:11:52.152 Thin Provisioning: Not Supported 00:11:52.152 Per-NS Atomic Units: No 00:11:52.152 Maximum Single Source Range Length: 128 00:11:52.152 Maximum Copy Length: 128 00:11:52.152 Maximum Source Range Count: 128 00:11:52.152 NGUID/EUI64 Never Reused: No 00:11:52.152 Namespace Write Protected: No 00:11:52.152 Number of LBA Formats: 8 00:11:52.152 Current LBA Format: [2024-12-06 15:38:40.831416] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 75814 terminated unexpected 00:11:52.152 LBA Format #07 00:11:52.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.152 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.152 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.152 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.152 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.152 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.152 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.152 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.152 00:11:52.152 NVM Specific Namespace Data 00:11:52.152 =========================== 00:11:52.152 Logical Block Storage Tag Mask: 0 00:11:52.152 Protection Information Capabilities: 00:11:52.152 16b Guard Protection Information Storage Tag Support: No 00:11:52.152 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.152 Storage Tag Check Read Support: No 00:11:52.152 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:11:52.152 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.152 ===================================================== 00:11:52.152 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:52.152 ===================================================== 00:11:52.152 Controller Capabilities/Features 00:11:52.152 ================================ 00:11:52.152 Vendor ID: 1b36 00:11:52.152 Subsystem Vendor ID: 1af4 00:11:52.152 Serial Number: 12341 00:11:52.152 Model Number: QEMU NVMe Ctrl 00:11:52.152 Firmware Version: 8.0.0 00:11:52.152 Recommended Arb Burst: 6 00:11:52.152 IEEE OUI Identifier: 00 54 52 00:11:52.152 Multi-path I/O 00:11:52.152 May have multiple subsystem ports: No 00:11:52.152 May have multiple controllers: No 00:11:52.152 Associated with SR-IOV VF: No 00:11:52.152 Max Data Transfer Size: 524288 00:11:52.152 Max Number of Namespaces: 256 00:11:52.152 Max Number of I/O Queues: 64 00:11:52.152 NVMe Specification Version (VS): 1.4 00:11:52.152 NVMe Specification Version (Identify): 1.4 00:11:52.152 Maximum Queue Entries: 2048 00:11:52.152 Contiguous Queues Required: Yes 00:11:52.152 Arbitration Mechanisms Supported 00:11:52.152 Weighted Round Robin: Not Supported 00:11:52.152 Vendor Specific: Not Supported 00:11:52.152 Reset Timeout: 7500 ms 00:11:52.152 Doorbell Stride: 4 bytes 00:11:52.152 NVM Subsystem Reset: Not Supported 00:11:52.152 Command Sets Supported 00:11:52.152 NVM Command Set: Supported 00:11:52.152 Boot Partition: Not Supported 00:11:52.152 Memory Page Size Minimum: 4096 bytes 00:11:52.152 Memory Page Size Maximum: 65536 bytes 00:11:52.152 Persistent Memory Region: Not Supported 00:11:52.152 Optional Asynchronous Events Supported 00:11:52.152 Namespace Attribute Notices: Supported 00:11:52.152 Firmware Activation Notices: Not Supported 00:11:52.152 ANA Change Notices: Not Supported 00:11:52.152 PLE Aggregate Log Change Notices: Not Supported 00:11:52.152 LBA Status Info Alert Notices: Not Supported 00:11:52.152 EGE Aggregate Log Change Notices: Not Supported 00:11:52.152 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.152 Zone Descriptor Change Notices: Not Supported 00:11:52.152 Discovery Log Change Notices: Not Supported 00:11:52.152 Controller Attributes 00:11:52.152 128-bit Host Identifier: Not Supported 00:11:52.152 Non-Operational Permissive Mode: Not Supported 00:11:52.152 NVM Sets: Not Supported 00:11:52.152 Read Recovery Levels: Not Supported 00:11:52.152 Endurance Groups: Not Supported 00:11:52.152 Predictable Latency Mode: Not Supported 00:11:52.152 Traffic Based Keep ALive: Not Supported 00:11:52.152 Namespace Granularity: Not Supported 00:11:52.152 SQ Associations: Not Supported 00:11:52.152 UUID List: Not Supported 00:11:52.152 Multi-Domain Subsystem: Not Supported 00:11:52.152 Fixed Capacity Management: Not Supported 00:11:52.152 Variable Capacity Management: Not Supported 00:11:52.152 Delete Endurance Group: Not Supported 00:11:52.152 Delete NVM Set: Not Supported 00:11:52.152 Extended LBA Formats Supported: Supported 00:11:52.152 Flexible Data Placement Supported: 
Not Supported 00:11:52.152 00:11:52.152 Controller Memory Buffer Support 00:11:52.152 ================================ 00:11:52.152 Supported: No 00:11:52.152 00:11:52.152 Persistent Memory Region Support 00:11:52.152 ================================ 00:11:52.152 Supported: No 00:11:52.152 00:11:52.152 Admin Command Set Attributes 00:11:52.153 ============================ 00:11:52.153 Security Send/Receive: Not Supported 00:11:52.153 Format NVM: Supported 00:11:52.153 Firmware Activate/Download: Not Supported 00:11:52.153 Namespace Management: Supported 00:11:52.153 Device Self-Test: Not Supported 00:11:52.153 Directives: Supported 00:11:52.153 NVMe-MI: Not Supported 00:11:52.153 Virtualization Management: Not Supported 00:11:52.153 Doorbell Buffer Config: Supported 00:11:52.153 Get LBA Status Capability: Not Supported 00:11:52.153 Command & Feature Lockdown Capability: Not Supported 00:11:52.153 Abort Command Limit: 4 00:11:52.153 Async Event Request Limit: 4 00:11:52.153 Number of Firmware Slots: N/A 00:11:52.153 Firmware Slot 1 Read-Only: N/A 00:11:52.153 Firmware Activation Without Reset: N/A 00:11:52.153 Multiple Update Detection Support: N/A 00:11:52.153 Firmware Update Granularity: No Information Provided 00:11:52.153 Per-Namespace SMART Log: Yes 00:11:52.153 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.153 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:52.153 Command Effects Log Page: Supported 00:11:52.153 Get Log Page Extended Data: Supported 00:11:52.153 Telemetry Log Pages: Not Supported 00:11:52.153 Persistent Event Log Pages: Not Supported 00:11:52.153 Supported Log Pages Log Page: May Support 00:11:52.153 Commands Supported & Effects Log Page: Not Supported 00:11:52.153 Feature Identifiers & Effects Log Page:May Support 00:11:52.153 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.153 Data Area 4 for Telemetry Log: Not Supported 00:11:52.153 Error Log Page Entries Supported: 1 00:11:52.153 Keep Alive: Not Supported 00:11:52.153 00:11:52.153 NVM Command Set Attributes 00:11:52.153 ========================== 00:11:52.153 Submission Queue Entry Size 00:11:52.153 Max: 64 00:11:52.153 Min: 64 00:11:52.153 Completion Queue Entry Size 00:11:52.153 Max: 16 00:11:52.153 Min: 16 00:11:52.153 Number of Namespaces: 256 00:11:52.153 Compare Command: Supported 00:11:52.153 Write Uncorrectable Command: Not Supported 00:11:52.153 Dataset Management Command: Supported 00:11:52.153 Write Zeroes Command: Supported 00:11:52.153 Set Features Save Field: Supported 00:11:52.153 Reservations: Not Supported 00:11:52.153 Timestamp: Supported 00:11:52.153 Copy: Supported 00:11:52.153 Volatile Write Cache: Present 00:11:52.153 Atomic Write Unit (Normal): 1 00:11:52.153 Atomic Write Unit (PFail): 1 00:11:52.153 Atomic Compare & Write Unit: 1 00:11:52.153 Fused Compare & Write: Not Supported 00:11:52.153 Scatter-Gather List 00:11:52.153 SGL Command Set: Supported 00:11:52.153 SGL Keyed: Not Supported 00:11:52.153 SGL Bit Bucket Descriptor: Not Supported 00:11:52.153 SGL Metadata Pointer: Not Supported 00:11:52.153 Oversized SGL: Not Supported 00:11:52.153 SGL Metadata Address: Not Supported 00:11:52.153 SGL Offset: Not Supported 00:11:52.153 Transport SGL Data Block: Not Supported 00:11:52.153 Replay Protected Memory Block: Not Supported 00:11:52.153 00:11:52.153 Firmware Slot Information 00:11:52.153 ========================= 00:11:52.153 Active slot: 1 00:11:52.153 Slot 1 Firmware Revision: 1.0 00:11:52.153 00:11:52.153 00:11:52.153 Commands Supported and Effects 00:11:52.153 
============================== 00:11:52.153 Admin Commands 00:11:52.153 -------------- 00:11:52.153 Delete I/O Submission Queue (00h): Supported 00:11:52.153 Create I/O Submission Queue (01h): Supported 00:11:52.153 Get Log Page (02h): Supported 00:11:52.153 Delete I/O Completion Queue (04h): Supported 00:11:52.153 Create I/O Completion Queue (05h): Supported 00:11:52.153 Identify (06h): Supported 00:11:52.153 Abort (08h): Supported 00:11:52.153 Set Features (09h): Supported 00:11:52.153 Get Features (0Ah): Supported 00:11:52.153 Asynchronous Event Request (0Ch): Supported 00:11:52.153 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.153 Directive Send (19h): Supported 00:11:52.153 Directive Receive (1Ah): Supported 00:11:52.153 Virtualization Management (1Ch): Supported 00:11:52.153 Doorbell Buffer Config (7Ch): Supported 00:11:52.153 Format NVM (80h): Supported LBA-Change 00:11:52.153 I/O Commands 00:11:52.153 ------------ 00:11:52.153 Flush (00h): Supported LBA-Change 00:11:52.153 Write (01h): Supported LBA-Change 00:11:52.153 Read (02h): Supported 00:11:52.153 Compare (05h): Supported 00:11:52.153 Write Zeroes (08h): Supported LBA-Change 00:11:52.153 Dataset Management (09h): Supported LBA-Change 00:11:52.153 Unknown (0Ch): Supported 00:11:52.153 Unknown (12h): Supported 00:11:52.153 Copy (19h): Supported LBA-Change 00:11:52.153 Unknown (1Dh): Supported LBA-Change 00:11:52.153 00:11:52.153 Error Log 00:11:52.153 ========= 00:11:52.153 00:11:52.153 Arbitration 00:11:52.153 =========== 00:11:52.153 Arbitration Burst: no limit 00:11:52.153 00:11:52.153 Power Management 00:11:52.153 ================ 00:11:52.153 Number of Power States: 1 00:11:52.153 Current Power State: Power State #0 00:11:52.153 Power State #0: 00:11:52.153 Max Power: 25.00 W 00:11:52.153 Non-Operational State: Operational 00:11:52.153 Entry Latency: 16 microseconds 00:11:52.153 Exit Latency: 4 microseconds 00:11:52.153 Relative Read Throughput: 0 00:11:52.153 Relative Read Latency: 0 00:11:52.153 Relative Write Throughput: 0 00:11:52.153 Relative Write Latency: 0 00:11:52.153 Idle Power: Not Reported 00:11:52.153 Active Power: Not Reported 00:11:52.153 Non-Operational Permissive Mode: Not Supported 00:11:52.153 00:11:52.153 Health Information 00:11:52.153 ================== 00:11:52.153 Critical Warnings: 00:11:52.153 Available Spare Space: OK 00:11:52.153 Temperature: OK 00:11:52.153 Device Reliability: OK 00:11:52.153 Read Only: No 00:11:52.153 Volatile Memory Backup: OK 00:11:52.153 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.153 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.153 Available Spare: 0% 00:11:52.153 Available Spare Threshold: 0% 00:11:52.153 Life Percentage Used: 0% 00:11:52.153 Data Units Read: 1047 00:11:52.153 Data Units Written: 907 00:11:52.153 Host Read Commands: 48186 00:11:52.153 Host Write Commands: 46878 00:11:52.153 Controller Busy Time: 0 minutes 00:11:52.153 Power Cycles: 0 00:11:52.153 Power On Hours: 0 hours 00:11:52.153 Unsafe Shutdowns: 0 00:11:52.153 Unrecoverable Media Errors: 0 00:11:52.153 Lifetime Error Log Entries: 0 00:11:52.153 Warning Temperature Time: 0 minutes 00:11:52.153 Critical Temperature Time: 0 minutes 00:11:52.153 00:11:52.153 Number of Queues 00:11:52.153 ================ 00:11:52.153 Number of I/O Submission Queues: 64 00:11:52.153 Number of I/O Completion Queues: 64 00:11:52.153 00:11:52.153 ZNS Specific Controller Data 00:11:52.153 ============================ 00:11:52.153 Zone Append Size Limit: 0 00:11:52.153 00:11:52.153 
00:11:52.153 Active Namespaces 00:11:52.153 ================= 00:11:52.153 Namespace ID:1 00:11:52.153 Error Recovery Timeout: Unlimited 00:11:52.153 Command Set Identifier: NVM (00h) 00:11:52.153 Deallocate: Supported 00:11:52.153 Deallocated/Unwritten Error: Supported 00:11:52.153 Deallocated Read Value: All 0x00 00:11:52.153 Deallocate in Write Zeroes: Not Supported 00:11:52.153 Deallocated Guard Field: 0xFFFF 00:11:52.153 Flush: Supported 00:11:52.153 Reservation: Not Supported 00:11:52.153 Namespace Sharing Capabilities: Private 00:11:52.153 Size (in LBAs): 1310720 (5GiB) 00:11:52.153 Capacity (in LBAs): 1310720 (5GiB) 00:11:52.153 Utilization (in LBAs): 1310720 (5GiB) 00:11:52.153 Thin Provisioning: Not Supported 00:11:52.153 Per-NS Atomic Units: No 00:11:52.153 Maximum Single Source Range Length: 128 00:11:52.153 Maximum Copy Length: 128 00:11:52.153 Maximum Source Range Count: 128 00:11:52.153 NGUID/EUI64 Never Reused: No 00:11:52.153 Namespace Write Protected: No 00:11:52.153 Number of LBA Formats: 8 00:11:52.153 Current LBA Format: LBA Format #04 00:11:52.153 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.153 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.153 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.153 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.153 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-12-06 15:38:40.833009] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 75814 terminated unexpected 00:11:52.153 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.153 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.153 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.153 00:11:52.153 NVM Specific Namespace Data 00:11:52.153 =========================== 00:11:52.153 Logical Block Storage Tag Mask: 0 00:11:52.153 Protection Information Capabilities: 00:11:52.153 16b Guard Protection Information Storage Tag Support: No 00:11:52.153 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.153 Storage Tag Check Read Support: No 00:11:52.153 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.153 ===================================================== 00:11:52.153 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:52.153 ===================================================== 00:11:52.153 Controller Capabilities/Features 00:11:52.153 ================================ 00:11:52.153 Vendor ID: 1b36 00:11:52.153 Subsystem Vendor ID: 1af4 00:11:52.153 Serial Number: 12342 00:11:52.153 Model Number: QEMU NVMe Ctrl 00:11:52.153 Firmware Version: 8.0.0 00:11:52.153 Recommended Arb Burst: 6 00:11:52.153 IEEE OUI Identifier: 00 54 52 00:11:52.153 Multi-path I/O 00:11:52.153 May have
multiple subsystem ports: No 00:11:52.153 May have multiple controllers: No 00:11:52.153 Associated with SR-IOV VF: No 00:11:52.153 Max Data Transfer Size: 524288 00:11:52.154 Max Number of Namespaces: 256 00:11:52.154 Max Number of I/O Queues: 64 00:11:52.154 NVMe Specification Version (VS): 1.4 00:11:52.154 NVMe Specification Version (Identify): 1.4 00:11:52.154 Maximum Queue Entries: 2048 00:11:52.154 Contiguous Queues Required: Yes 00:11:52.154 Arbitration Mechanisms Supported 00:11:52.154 Weighted Round Robin: Not Supported 00:11:52.154 Vendor Specific: Not Supported 00:11:52.154 Reset Timeout: 7500 ms 00:11:52.154 Doorbell Stride: 4 bytes 00:11:52.154 NVM Subsystem Reset: Not Supported 00:11:52.154 Command Sets Supported 00:11:52.154 NVM Command Set: Supported 00:11:52.154 Boot Partition: Not Supported 00:11:52.154 Memory Page Size Minimum: 4096 bytes 00:11:52.154 Memory Page Size Maximum: 65536 bytes 00:11:52.154 Persistent Memory Region: Not Supported 00:11:52.154 Optional Asynchronous Events Supported 00:11:52.154 Namespace Attribute Notices: Supported 00:11:52.154 Firmware Activation Notices: Not Supported 00:11:52.154 ANA Change Notices: Not Supported 00:11:52.154 PLE Aggregate Log Change Notices: Not Supported 00:11:52.154 LBA Status Info Alert Notices: Not Supported 00:11:52.154 EGE Aggregate Log Change Notices: Not Supported 00:11:52.154 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.154 Zone Descriptor Change Notices: Not Supported 00:11:52.154 Discovery Log Change Notices: Not Supported 00:11:52.154 Controller Attributes 00:11:52.154 128-bit Host Identifier: Not Supported 00:11:52.154 Non-Operational Permissive Mode: Not Supported 00:11:52.154 NVM Sets: Not Supported 00:11:52.154 Read Recovery Levels: Not Supported 00:11:52.154 Endurance Groups: Not Supported 00:11:52.154 Predictable Latency Mode: Not Supported 00:11:52.154 Traffic Based Keep ALive: Not Supported 00:11:52.154 Namespace Granularity: Not Supported 00:11:52.154 SQ Associations: Not Supported 00:11:52.154 UUID List: Not Supported 00:11:52.154 Multi-Domain Subsystem: Not Supported 00:11:52.154 Fixed Capacity Management: Not Supported 00:11:52.154 Variable Capacity Management: Not Supported 00:11:52.154 Delete Endurance Group: Not Supported 00:11:52.154 Delete NVM Set: Not Supported 00:11:52.154 Extended LBA Formats Supported: Supported 00:11:52.154 Flexible Data Placement Supported: Not Supported 00:11:52.154 00:11:52.154 Controller Memory Buffer Support 00:11:52.154 ================================ 00:11:52.154 Supported: No 00:11:52.154 00:11:52.154 Persistent Memory Region Support 00:11:52.154 ================================ 00:11:52.154 Supported: No 00:11:52.154 00:11:52.154 Admin Command Set Attributes 00:11:52.154 ============================ 00:11:52.154 Security Send/Receive: Not Supported 00:11:52.154 Format NVM: Supported 00:11:52.154 Firmware Activate/Download: Not Supported 00:11:52.154 Namespace Management: Supported 00:11:52.154 Device Self-Test: Not Supported 00:11:52.154 Directives: Supported 00:11:52.154 NVMe-MI: Not Supported 00:11:52.154 Virtualization Management: Not Supported 00:11:52.154 Doorbell Buffer Config: Supported 00:11:52.154 Get LBA Status Capability: Not Supported 00:11:52.154 Command & Feature Lockdown Capability: Not Supported 00:11:52.154 Abort Command Limit: 4 00:11:52.154 Async Event Request Limit: 4 00:11:52.154 Number of Firmware Slots: N/A 00:11:52.154 Firmware Slot 1 Read-Only: N/A 00:11:52.154 Firmware Activation Without Reset: N/A 00:11:52.154 Multiple 
Update Detection Support: N/A 00:11:52.154 Firmware Update Granularity: No Information Provided 00:11:52.154 Per-Namespace SMART Log: Yes 00:11:52.154 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.154 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:52.154 Command Effects Log Page: Supported 00:11:52.154 Get Log Page Extended Data: Supported 00:11:52.154 Telemetry Log Pages: Not Supported 00:11:52.154 Persistent Event Log Pages: Not Supported 00:11:52.154 Supported Log Pages Log Page: May Support 00:11:52.154 Commands Supported & Effects Log Page: Not Supported 00:11:52.154 Feature Identifiers & Effects Log Page:May Support 00:11:52.154 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.154 Data Area 4 for Telemetry Log: Not Supported 00:11:52.154 Error Log Page Entries Supported: 1 00:11:52.154 Keep Alive: Not Supported 00:11:52.154 00:11:52.154 NVM Command Set Attributes 00:11:52.154 ========================== 00:11:52.154 Submission Queue Entry Size 00:11:52.154 Max: 64 00:11:52.154 Min: 64 00:11:52.154 Completion Queue Entry Size 00:11:52.154 Max: 16 00:11:52.154 Min: 16 00:11:52.154 Number of Namespaces: 256 00:11:52.154 Compare Command: Supported 00:11:52.154 Write Uncorrectable Command: Not Supported 00:11:52.154 Dataset Management Command: Supported 00:11:52.154 Write Zeroes Command: Supported 00:11:52.154 Set Features Save Field: Supported 00:11:52.154 Reservations: Not Supported 00:11:52.154 Timestamp: Supported 00:11:52.154 Copy: Supported 00:11:52.154 Volatile Write Cache: Present 00:11:52.154 Atomic Write Unit (Normal): 1 00:11:52.154 Atomic Write Unit (PFail): 1 00:11:52.154 Atomic Compare & Write Unit: 1 00:11:52.154 Fused Compare & Write: Not Supported 00:11:52.154 Scatter-Gather List 00:11:52.154 SGL Command Set: Supported 00:11:52.154 SGL Keyed: Not Supported 00:11:52.154 SGL Bit Bucket Descriptor: Not Supported 00:11:52.154 SGL Metadata Pointer: Not Supported 00:11:52.154 Oversized SGL: Not Supported 00:11:52.154 SGL Metadata Address: Not Supported 00:11:52.154 SGL Offset: Not Supported 00:11:52.154 Transport SGL Data Block: Not Supported 00:11:52.154 Replay Protected Memory Block: Not Supported 00:11:52.154 00:11:52.154 Firmware Slot Information 00:11:52.154 ========================= 00:11:52.154 Active slot: 1 00:11:52.154 Slot 1 Firmware Revision: 1.0 00:11:52.154 00:11:52.154 00:11:52.154 Commands Supported and Effects 00:11:52.154 ============================== 00:11:52.154 Admin Commands 00:11:52.154 -------------- 00:11:52.154 Delete I/O Submission Queue (00h): Supported 00:11:52.154 Create I/O Submission Queue (01h): Supported 00:11:52.154 Get Log Page (02h): Supported 00:11:52.154 Delete I/O Completion Queue (04h): Supported 00:11:52.154 Create I/O Completion Queue (05h): Supported 00:11:52.154 Identify (06h): Supported 00:11:52.154 Abort (08h): Supported 00:11:52.154 Set Features (09h): Supported 00:11:52.154 Get Features (0Ah): Supported 00:11:52.154 Asynchronous Event Request (0Ch): Supported 00:11:52.154 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.154 Directive Send (19h): Supported 00:11:52.154 Directive Receive (1Ah): Supported 00:11:52.154 Virtualization Management (1Ch): Supported 00:11:52.154 Doorbell Buffer Config (7Ch): Supported 00:11:52.154 Format NVM (80h): Supported LBA-Change 00:11:52.154 I/O Commands 00:11:52.154 ------------ 00:11:52.154 Flush (00h): Supported LBA-Change 00:11:52.154 Write (01h): Supported LBA-Change 00:11:52.154 Read (02h): Supported 00:11:52.154 Compare (05h): Supported 00:11:52.154 
Write Zeroes (08h): Supported LBA-Change 00:11:52.154 Dataset Management (09h): Supported LBA-Change 00:11:52.154 Unknown (0Ch): Supported 00:11:52.154 Unknown (12h): Supported 00:11:52.154 Copy (19h): Supported LBA-Change 00:11:52.154 Unknown (1Dh): Supported LBA-Change 00:11:52.154 00:11:52.154 Error Log 00:11:52.154 ========= 00:11:52.154 00:11:52.154 Arbitration 00:11:52.154 =========== 00:11:52.154 Arbitration Burst: no limit 00:11:52.154 00:11:52.154 Power Management 00:11:52.154 ================ 00:11:52.154 Number of Power States: 1 00:11:52.154 Current Power State: Power State #0 00:11:52.154 Power State #0: 00:11:52.154 Max Power: 25.00 W 00:11:52.154 Non-Operational State: Operational 00:11:52.154 Entry Latency: 16 microseconds 00:11:52.154 Exit Latency: 4 microseconds 00:11:52.154 Relative Read Throughput: 0 00:11:52.154 Relative Read Latency: 0 00:11:52.154 Relative Write Throughput: 0 00:11:52.154 Relative Write Latency: 0 00:11:52.154 Idle Power: Not Reported 00:11:52.154 Active Power: Not Reported 00:11:52.154 Non-Operational Permissive Mode: Not Supported 00:11:52.154 00:11:52.154 Health Information 00:11:52.154 ================== 00:11:52.154 Critical Warnings: 00:11:52.154 Available Spare Space: OK 00:11:52.154 Temperature: OK 00:11:52.154 Device Reliability: OK 00:11:52.154 Read Only: No 00:11:52.154 Volatile Memory Backup: OK 00:11:52.154 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.155 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.155 Available Spare: 0% 00:11:52.155 Available Spare Threshold: 0% 00:11:52.155 Life Percentage Used: 0% 00:11:52.155 Data Units Read: 2188 00:11:52.155 Data Units Written: 1976 00:11:52.155 Host Read Commands: 100103 00:11:52.155 Host Write Commands: 98372 00:11:52.155 Controller Busy Time: 0 minutes 00:11:52.155 Power Cycles: 0 00:11:52.155 Power On Hours: 0 hours 00:11:52.155 Unsafe Shutdowns: 0 00:11:52.155 Unrecoverable Media Errors: 0 00:11:52.155 Lifetime Error Log Entries: 0 00:11:52.155 Warning Temperature Time: 0 minutes 00:11:52.155 Critical Temperature Time: 0 minutes 00:11:52.155 00:11:52.155 Number of Queues 00:11:52.155 ================ 00:11:52.155 Number of I/O Submission Queues: 64 00:11:52.155 Number of I/O Completion Queues: 64 00:11:52.155 00:11:52.155 ZNS Specific Controller Data 00:11:52.155 ============================ 00:11:52.155 Zone Append Size Limit: 0 00:11:52.155 00:11:52.155 00:11:52.155 Active Namespaces 00:11:52.155 ================= 00:11:52.155 Namespace ID:1 00:11:52.155 Error Recovery Timeout: Unlimited 00:11:52.155 Command Set Identifier: NVM (00h) 00:11:52.155 Deallocate: Supported 00:11:52.155 Deallocated/Unwritten Error: Supported 00:11:52.155 Deallocated Read Value: All 0x00 00:11:52.155 Deallocate in Write Zeroes: Not Supported 00:11:52.155 Deallocated Guard Field: 0xFFFF 00:11:52.155 Flush: Supported 00:11:52.155 Reservation: Not Supported 00:11:52.155 Namespace Sharing Capabilities: Private 00:11:52.155 Size (in LBAs): 1048576 (4GiB) 00:11:52.155 Capacity (in LBAs): 1048576 (4GiB) 00:11:52.155 Utilization (in LBAs): 1048576 (4GiB) 00:11:52.155 Thin Provisioning: Not Supported 00:11:52.155 Per-NS Atomic Units: No 00:11:52.155 Maximum Single Source Range Length: 128 00:11:52.155 Maximum Copy Length: 128 00:11:52.155 Maximum Source Range Count: 128 00:11:52.155 NGUID/EUI64 Never Reused: No 00:11:52.155 Namespace Write Protected: No 00:11:52.155 Number of LBA Formats: 8 00:11:52.155 Current LBA Format: LBA Format #04 00:11:52.155 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:11:52.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.155 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.155 00:11:52.155 NVM Specific Namespace Data 00:11:52.155 =========================== 00:11:52.155 Logical Block Storage Tag Mask: 0 00:11:52.155 Protection Information Capabilities: 00:11:52.155 16b Guard Protection Information Storage Tag Support: No 00:11:52.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.155 Storage Tag Check Read Support: No 00:11:52.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Namespace ID:2 00:11:52.155 Error Recovery Timeout: Unlimited 00:11:52.155 Command Set Identifier: NVM (00h) 00:11:52.155 Deallocate: Supported 00:11:52.155 Deallocated/Unwritten Error: Supported 00:11:52.155 Deallocated Read Value: All 0x00 00:11:52.155 Deallocate in Write Zeroes: Not Supported 00:11:52.155 Deallocated Guard Field: 0xFFFF 00:11:52.155 Flush: Supported 00:11:52.155 Reservation: Not Supported 00:11:52.155 Namespace Sharing Capabilities: Private 00:11:52.155 Size (in LBAs): 1048576 (4GiB) 00:11:52.155 Capacity (in LBAs): 1048576 (4GiB) 00:11:52.155 Utilization (in LBAs): 1048576 (4GiB) 00:11:52.155 Thin Provisioning: Not Supported 00:11:52.155 Per-NS Atomic Units: No 00:11:52.155 Maximum Single Source Range Length: 128 00:11:52.155 Maximum Copy Length: 128 00:11:52.155 Maximum Source Range Count: 128 00:11:52.155 NGUID/EUI64 Never Reused: No 00:11:52.155 Namespace Write Protected: No 00:11:52.155 Number of LBA Formats: 8 00:11:52.155 Current LBA Format: LBA Format #04 00:11:52.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.155 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.155 00:11:52.155 NVM Specific Namespace Data 00:11:52.155 =========================== 00:11:52.155 Logical Block Storage Tag Mask: 0 00:11:52.155 Protection Information Capabilities: 00:11:52.155 16b Guard Protection Information Storage Tag Support: No 00:11:52.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.155 Storage Tag 
Check Read Support: No 00:11:52.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.155 Namespace ID:3 00:11:52.155 Error Recovery Timeout: Unlimited 00:11:52.155 Command Set Identifier: NVM (00h) 00:11:52.155 Deallocate: Supported 00:11:52.155 Deallocated/Unwritten Error: Supported 00:11:52.155 Deallocated Read Value: All 0x00 00:11:52.155 Deallocate in Write Zeroes: Not Supported 00:11:52.155 Deallocated Guard Field: 0xFFFF 00:11:52.155 Flush: Supported 00:11:52.155 Reservation: Not Supported 00:11:52.155 Namespace Sharing Capabilities: Private 00:11:52.155 Size (in LBAs): 1048576 (4GiB) 00:11:52.414 Capacity (in LBAs): 1048576 (4GiB) 00:11:52.414 Utilization (in LBAs): 1048576 (4GiB) 00:11:52.414 Thin Provisioning: Not Supported 00:11:52.414 Per-NS Atomic Units: No 00:11:52.414 Maximum Single Source Range Length: 128 00:11:52.414 Maximum Copy Length: 128 00:11:52.414 Maximum Source Range Count: 128 00:11:52.414 NGUID/EUI64 Never Reused: No 00:11:52.414 Namespace Write Protected: No 00:11:52.414 Number of LBA Formats: 8 00:11:52.414 Current LBA Format: LBA Format #04 00:11:52.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.414 00:11:52.414 NVM Specific Namespace Data 00:11:52.414 =========================== 00:11:52.414 Logical Block Storage Tag Mask: 0 00:11:52.414 Protection Information Capabilities: 00:11:52.414 16b Guard Protection Information Storage Tag Support: No 00:11:52.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.414 Storage Tag Check Read Support: No 00:11:52.414 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.414 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:11:52.414 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:52.414 15:38:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:52.674 ===================================================== 00:11:52.674 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:52.674 ===================================================== 00:11:52.674 Controller Capabilities/Features 00:11:52.674 ================================ 00:11:52.674 Vendor ID: 1b36 00:11:52.674 Subsystem Vendor ID: 1af4 00:11:52.674 Serial Number: 12340 00:11:52.674 Model Number: QEMU NVMe Ctrl 00:11:52.674 Firmware Version: 8.0.0 00:11:52.674 Recommended Arb Burst: 6 00:11:52.674 IEEE OUI Identifier: 00 54 52 00:11:52.674 Multi-path I/O 00:11:52.674 May have multiple subsystem ports: No 00:11:52.674 May have multiple controllers: No 00:11:52.674 Associated with SR-IOV VF: No 00:11:52.674 Max Data Transfer Size: 524288 00:11:52.674 Max Number of Namespaces: 256 00:11:52.674 Max Number of I/O Queues: 64 00:11:52.674 NVMe Specification Version (VS): 1.4 00:11:52.674 NVMe Specification Version (Identify): 1.4 00:11:52.674 Maximum Queue Entries: 2048 00:11:52.674 Contiguous Queues Required: Yes 00:11:52.674 Arbitration Mechanisms Supported 00:11:52.674 Weighted Round Robin: Not Supported 00:11:52.674 Vendor Specific: Not Supported 00:11:52.674 Reset Timeout: 7500 ms 00:11:52.674 Doorbell Stride: 4 bytes 00:11:52.674 NVM Subsystem Reset: Not Supported 00:11:52.674 Command Sets Supported 00:11:52.674 NVM Command Set: Supported 00:11:52.674 Boot Partition: Not Supported 00:11:52.674 Memory Page Size Minimum: 4096 bytes 00:11:52.674 Memory Page Size Maximum: 65536 bytes 00:11:52.674 Persistent Memory Region: Not Supported 00:11:52.674 Optional Asynchronous Events Supported 00:11:52.674 Namespace Attribute Notices: Supported 00:11:52.674 Firmware Activation Notices: Not Supported 00:11:52.674 ANA Change Notices: Not Supported 00:11:52.674 PLE Aggregate Log Change Notices: Not Supported 00:11:52.674 LBA Status Info Alert Notices: Not Supported 00:11:52.674 EGE Aggregate Log Change Notices: Not Supported 00:11:52.674 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.674 Zone Descriptor Change Notices: Not Supported 00:11:52.674 Discovery Log Change Notices: Not Supported 00:11:52.674 Controller Attributes 00:11:52.674 128-bit Host Identifier: Not Supported 00:11:52.674 Non-Operational Permissive Mode: Not Supported 00:11:52.674 NVM Sets: Not Supported 00:11:52.674 Read Recovery Levels: Not Supported 00:11:52.674 Endurance Groups: Not Supported 00:11:52.674 Predictable Latency Mode: Not Supported 00:11:52.674 Traffic Based Keep ALive: Not Supported 00:11:52.674 Namespace Granularity: Not Supported 00:11:52.674 SQ Associations: Not Supported 00:11:52.674 UUID List: Not Supported 00:11:52.674 Multi-Domain Subsystem: Not Supported 00:11:52.674 Fixed Capacity Management: Not Supported 00:11:52.674 Variable Capacity Management: Not Supported 00:11:52.674 Delete Endurance Group: Not Supported 00:11:52.674 Delete NVM Set: Not Supported 00:11:52.674 Extended LBA Formats Supported: Supported 00:11:52.674 Flexible Data Placement Supported: Not Supported 00:11:52.674 00:11:52.674 Controller Memory Buffer Support 00:11:52.674 ================================ 00:11:52.674 Supported: No 00:11:52.674 00:11:52.674 Persistent Memory Region Support 00:11:52.674 
================================ 00:11:52.674 Supported: No 00:11:52.674 00:11:52.674 Admin Command Set Attributes 00:11:52.674 ============================ 00:11:52.674 Security Send/Receive: Not Supported 00:11:52.674 Format NVM: Supported 00:11:52.674 Firmware Activate/Download: Not Supported 00:11:52.674 Namespace Management: Supported 00:11:52.674 Device Self-Test: Not Supported 00:11:52.674 Directives: Supported 00:11:52.674 NVMe-MI: Not Supported 00:11:52.674 Virtualization Management: Not Supported 00:11:52.674 Doorbell Buffer Config: Supported 00:11:52.674 Get LBA Status Capability: Not Supported 00:11:52.674 Command & Feature Lockdown Capability: Not Supported 00:11:52.674 Abort Command Limit: 4 00:11:52.674 Async Event Request Limit: 4 00:11:52.674 Number of Firmware Slots: N/A 00:11:52.674 Firmware Slot 1 Read-Only: N/A 00:11:52.674 Firmware Activation Without Reset: N/A 00:11:52.674 Multiple Update Detection Support: N/A 00:11:52.674 Firmware Update Granularity: No Information Provided 00:11:52.674 Per-Namespace SMART Log: Yes 00:11:52.674 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.674 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:52.674 Command Effects Log Page: Supported 00:11:52.674 Get Log Page Extended Data: Supported 00:11:52.674 Telemetry Log Pages: Not Supported 00:11:52.674 Persistent Event Log Pages: Not Supported 00:11:52.674 Supported Log Pages Log Page: May Support 00:11:52.674 Commands Supported & Effects Log Page: Not Supported 00:11:52.674 Feature Identifiers & Effects Log Page:May Support 00:11:52.674 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.674 Data Area 4 for Telemetry Log: Not Supported 00:11:52.674 Error Log Page Entries Supported: 1 00:11:52.674 Keep Alive: Not Supported 00:11:52.674 00:11:52.674 NVM Command Set Attributes 00:11:52.674 ========================== 00:11:52.674 Submission Queue Entry Size 00:11:52.674 Max: 64 00:11:52.675 Min: 64 00:11:52.675 Completion Queue Entry Size 00:11:52.675 Max: 16 00:11:52.675 Min: 16 00:11:52.675 Number of Namespaces: 256 00:11:52.675 Compare Command: Supported 00:11:52.675 Write Uncorrectable Command: Not Supported 00:11:52.675 Dataset Management Command: Supported 00:11:52.675 Write Zeroes Command: Supported 00:11:52.675 Set Features Save Field: Supported 00:11:52.675 Reservations: Not Supported 00:11:52.675 Timestamp: Supported 00:11:52.675 Copy: Supported 00:11:52.675 Volatile Write Cache: Present 00:11:52.675 Atomic Write Unit (Normal): 1 00:11:52.675 Atomic Write Unit (PFail): 1 00:11:52.675 Atomic Compare & Write Unit: 1 00:11:52.675 Fused Compare & Write: Not Supported 00:11:52.675 Scatter-Gather List 00:11:52.675 SGL Command Set: Supported 00:11:52.675 SGL Keyed: Not Supported 00:11:52.675 SGL Bit Bucket Descriptor: Not Supported 00:11:52.675 SGL Metadata Pointer: Not Supported 00:11:52.675 Oversized SGL: Not Supported 00:11:52.675 SGL Metadata Address: Not Supported 00:11:52.675 SGL Offset: Not Supported 00:11:52.675 Transport SGL Data Block: Not Supported 00:11:52.675 Replay Protected Memory Block: Not Supported 00:11:52.675 00:11:52.675 Firmware Slot Information 00:11:52.675 ========================= 00:11:52.675 Active slot: 1 00:11:52.675 Slot 1 Firmware Revision: 1.0 00:11:52.675 00:11:52.675 00:11:52.675 Commands Supported and Effects 00:11:52.675 ============================== 00:11:52.675 Admin Commands 00:11:52.675 -------------- 00:11:52.675 Delete I/O Submission Queue (00h): Supported 00:11:52.675 Create I/O Submission Queue (01h): Supported 00:11:52.675 
Get Log Page (02h): Supported 00:11:52.675 Delete I/O Completion Queue (04h): Supported 00:11:52.675 Create I/O Completion Queue (05h): Supported 00:11:52.675 Identify (06h): Supported 00:11:52.675 Abort (08h): Supported 00:11:52.675 Set Features (09h): Supported 00:11:52.675 Get Features (0Ah): Supported 00:11:52.675 Asynchronous Event Request (0Ch): Supported 00:11:52.675 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.675 Directive Send (19h): Supported 00:11:52.675 Directive Receive (1Ah): Supported 00:11:52.675 Virtualization Management (1Ch): Supported 00:11:52.675 Doorbell Buffer Config (7Ch): Supported 00:11:52.675 Format NVM (80h): Supported LBA-Change 00:11:52.675 I/O Commands 00:11:52.675 ------------ 00:11:52.675 Flush (00h): Supported LBA-Change 00:11:52.675 Write (01h): Supported LBA-Change 00:11:52.675 Read (02h): Supported 00:11:52.675 Compare (05h): Supported 00:11:52.675 Write Zeroes (08h): Supported LBA-Change 00:11:52.675 Dataset Management (09h): Supported LBA-Change 00:11:52.675 Unknown (0Ch): Supported 00:11:52.675 Unknown (12h): Supported 00:11:52.675 Copy (19h): Supported LBA-Change 00:11:52.675 Unknown (1Dh): Supported LBA-Change 00:11:52.675 00:11:52.675 Error Log 00:11:52.675 ========= 00:11:52.675 00:11:52.675 Arbitration 00:11:52.675 =========== 00:11:52.675 Arbitration Burst: no limit 00:11:52.675 00:11:52.675 Power Management 00:11:52.675 ================ 00:11:52.675 Number of Power States: 1 00:11:52.675 Current Power State: Power State #0 00:11:52.675 Power State #0: 00:11:52.675 Max Power: 25.00 W 00:11:52.675 Non-Operational State: Operational 00:11:52.675 Entry Latency: 16 microseconds 00:11:52.675 Exit Latency: 4 microseconds 00:11:52.675 Relative Read Throughput: 0 00:11:52.675 Relative Read Latency: 0 00:11:52.675 Relative Write Throughput: 0 00:11:52.675 Relative Write Latency: 0 00:11:52.675 Idle Power: Not Reported 00:11:52.675 Active Power: Not Reported 00:11:52.675 Non-Operational Permissive Mode: Not Supported 00:11:52.675 00:11:52.675 Health Information 00:11:52.675 ================== 00:11:52.675 Critical Warnings: 00:11:52.675 Available Spare Space: OK 00:11:52.675 Temperature: OK 00:11:52.675 Device Reliability: OK 00:11:52.675 Read Only: No 00:11:52.675 Volatile Memory Backup: OK 00:11:52.675 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.675 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.675 Available Spare: 0% 00:11:52.675 Available Spare Threshold: 0% 00:11:52.675 Life Percentage Used: 0% 00:11:52.675 Data Units Read: 684 00:11:52.675 Data Units Written: 612 00:11:52.675 Host Read Commands: 32661 00:11:52.675 Host Write Commands: 32447 00:11:52.675 Controller Busy Time: 0 minutes 00:11:52.675 Power Cycles: 0 00:11:52.675 Power On Hours: 0 hours 00:11:52.675 Unsafe Shutdowns: 0 00:11:52.675 Unrecoverable Media Errors: 0 00:11:52.675 Lifetime Error Log Entries: 0 00:11:52.675 Warning Temperature Time: 0 minutes 00:11:52.675 Critical Temperature Time: 0 minutes 00:11:52.675 00:11:52.675 Number of Queues 00:11:52.675 ================ 00:11:52.675 Number of I/O Submission Queues: 64 00:11:52.675 Number of I/O Completion Queues: 64 00:11:52.675 00:11:52.675 ZNS Specific Controller Data 00:11:52.675 ============================ 00:11:52.675 Zone Append Size Limit: 0 00:11:52.675 00:11:52.675 00:11:52.675 Active Namespaces 00:11:52.675 ================= 00:11:52.675 Namespace ID:1 00:11:52.675 Error Recovery Timeout: Unlimited 00:11:52.675 Command Set Identifier: NVM (00h) 00:11:52.675 Deallocate: Supported 
00:11:52.675 Deallocated/Unwritten Error: Supported 00:11:52.675 Deallocated Read Value: All 0x00 00:11:52.675 Deallocate in Write Zeroes: Not Supported 00:11:52.675 Deallocated Guard Field: 0xFFFF 00:11:52.675 Flush: Supported 00:11:52.675 Reservation: Not Supported 00:11:52.675 Metadata Transferred as: Separate Metadata Buffer 00:11:52.675 Namespace Sharing Capabilities: Private 00:11:52.675 Size (in LBAs): 1548666 (5GiB) 00:11:52.675 Capacity (in LBAs): 1548666 (5GiB) 00:11:52.675 Utilization (in LBAs): 1548666 (5GiB) 00:11:52.675 Thin Provisioning: Not Supported 00:11:52.675 Per-NS Atomic Units: No 00:11:52.675 Maximum Single Source Range Length: 128 00:11:52.675 Maximum Copy Length: 128 00:11:52.675 Maximum Source Range Count: 128 00:11:52.675 NGUID/EUI64 Never Reused: No 00:11:52.675 Namespace Write Protected: No 00:11:52.675 Number of LBA Formats: 8 00:11:52.675 Current LBA Format: LBA Format #07 00:11:52.675 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.675 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:52.675 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.675 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.675 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.675 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.675 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.675 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.675 00:11:52.675 NVM Specific Namespace Data 00:11:52.675 =========================== 00:11:52.675 Logical Block Storage Tag Mask: 0 00:11:52.675 Protection Information Capabilities: 00:11:52.675 16b Guard Protection Information Storage Tag Support: No 00:11:52.675 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.675 Storage Tag Check Read Support: No 00:11:52.675 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.676 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:52.676 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:52.935 ===================================================== 00:11:52.935 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:52.935 ===================================================== 00:11:52.935 Controller Capabilities/Features 00:11:52.935 ================================ 00:11:52.935 Vendor ID: 1b36 00:11:52.936 Subsystem Vendor ID: 1af4 00:11:52.936 Serial Number: 12341 00:11:52.936 Model Number: QEMU NVMe Ctrl 00:11:52.936 Firmware Version: 8.0.0 00:11:52.936 Recommended Arb Burst: 6 00:11:52.936 IEEE OUI Identifier: 00 54 52 00:11:52.936 Multi-path I/O 00:11:52.936 May have multiple subsystem ports: No 00:11:52.936 May have multiple 
controllers: No 00:11:52.936 Associated with SR-IOV VF: No 00:11:52.936 Max Data Transfer Size: 524288 00:11:52.936 Max Number of Namespaces: 256 00:11:52.936 Max Number of I/O Queues: 64 00:11:52.936 NVMe Specification Version (VS): 1.4 00:11:52.936 NVMe Specification Version (Identify): 1.4 00:11:52.936 Maximum Queue Entries: 2048 00:11:52.936 Contiguous Queues Required: Yes 00:11:52.936 Arbitration Mechanisms Supported 00:11:52.936 Weighted Round Robin: Not Supported 00:11:52.936 Vendor Specific: Not Supported 00:11:52.936 Reset Timeout: 7500 ms 00:11:52.936 Doorbell Stride: 4 bytes 00:11:52.936 NVM Subsystem Reset: Not Supported 00:11:52.936 Command Sets Supported 00:11:52.936 NVM Command Set: Supported 00:11:52.936 Boot Partition: Not Supported 00:11:52.936 Memory Page Size Minimum: 4096 bytes 00:11:52.936 Memory Page Size Maximum: 65536 bytes 00:11:52.936 Persistent Memory Region: Not Supported 00:11:52.936 Optional Asynchronous Events Supported 00:11:52.936 Namespace Attribute Notices: Supported 00:11:52.936 Firmware Activation Notices: Not Supported 00:11:52.936 ANA Change Notices: Not Supported 00:11:52.936 PLE Aggregate Log Change Notices: Not Supported 00:11:52.936 LBA Status Info Alert Notices: Not Supported 00:11:52.936 EGE Aggregate Log Change Notices: Not Supported 00:11:52.936 Normal NVM Subsystem Shutdown event: Not Supported 00:11:52.936 Zone Descriptor Change Notices: Not Supported 00:11:52.936 Discovery Log Change Notices: Not Supported 00:11:52.936 Controller Attributes 00:11:52.936 128-bit Host Identifier: Not Supported 00:11:52.936 Non-Operational Permissive Mode: Not Supported 00:11:52.936 NVM Sets: Not Supported 00:11:52.936 Read Recovery Levels: Not Supported 00:11:52.936 Endurance Groups: Not Supported 00:11:52.936 Predictable Latency Mode: Not Supported 00:11:52.936 Traffic Based Keep ALive: Not Supported 00:11:52.936 Namespace Granularity: Not Supported 00:11:52.936 SQ Associations: Not Supported 00:11:52.936 UUID List: Not Supported 00:11:52.936 Multi-Domain Subsystem: Not Supported 00:11:52.936 Fixed Capacity Management: Not Supported 00:11:52.936 Variable Capacity Management: Not Supported 00:11:52.936 Delete Endurance Group: Not Supported 00:11:52.936 Delete NVM Set: Not Supported 00:11:52.936 Extended LBA Formats Supported: Supported 00:11:52.936 Flexible Data Placement Supported: Not Supported 00:11:52.936 00:11:52.936 Controller Memory Buffer Support 00:11:52.936 ================================ 00:11:52.936 Supported: No 00:11:52.936 00:11:52.936 Persistent Memory Region Support 00:11:52.936 ================================ 00:11:52.936 Supported: No 00:11:52.936 00:11:52.936 Admin Command Set Attributes 00:11:52.936 ============================ 00:11:52.936 Security Send/Receive: Not Supported 00:11:52.936 Format NVM: Supported 00:11:52.936 Firmware Activate/Download: Not Supported 00:11:52.936 Namespace Management: Supported 00:11:52.936 Device Self-Test: Not Supported 00:11:52.936 Directives: Supported 00:11:52.936 NVMe-MI: Not Supported 00:11:52.936 Virtualization Management: Not Supported 00:11:52.936 Doorbell Buffer Config: Supported 00:11:52.936 Get LBA Status Capability: Not Supported 00:11:52.936 Command & Feature Lockdown Capability: Not Supported 00:11:52.936 Abort Command Limit: 4 00:11:52.936 Async Event Request Limit: 4 00:11:52.936 Number of Firmware Slots: N/A 00:11:52.936 Firmware Slot 1 Read-Only: N/A 00:11:52.936 Firmware Activation Without Reset: N/A 00:11:52.936 Multiple Update Detection Support: N/A 00:11:52.936 Firmware Update 
Granularity: No Information Provided 00:11:52.936 Per-Namespace SMART Log: Yes 00:11:52.936 Asymmetric Namespace Access Log Page: Not Supported 00:11:52.936 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:52.936 Command Effects Log Page: Supported 00:11:52.936 Get Log Page Extended Data: Supported 00:11:52.936 Telemetry Log Pages: Not Supported 00:11:52.936 Persistent Event Log Pages: Not Supported 00:11:52.936 Supported Log Pages Log Page: May Support 00:11:52.936 Commands Supported & Effects Log Page: Not Supported 00:11:52.936 Feature Identifiers & Effects Log Page:May Support 00:11:52.936 NVMe-MI Commands & Effects Log Page: May Support 00:11:52.936 Data Area 4 for Telemetry Log: Not Supported 00:11:52.936 Error Log Page Entries Supported: 1 00:11:52.936 Keep Alive: Not Supported 00:11:52.936 00:11:52.936 NVM Command Set Attributes 00:11:52.936 ========================== 00:11:52.936 Submission Queue Entry Size 00:11:52.936 Max: 64 00:11:52.936 Min: 64 00:11:52.936 Completion Queue Entry Size 00:11:52.936 Max: 16 00:11:52.936 Min: 16 00:11:52.936 Number of Namespaces: 256 00:11:52.936 Compare Command: Supported 00:11:52.936 Write Uncorrectable Command: Not Supported 00:11:52.936 Dataset Management Command: Supported 00:11:52.936 Write Zeroes Command: Supported 00:11:52.936 Set Features Save Field: Supported 00:11:52.936 Reservations: Not Supported 00:11:52.936 Timestamp: Supported 00:11:52.936 Copy: Supported 00:11:52.936 Volatile Write Cache: Present 00:11:52.936 Atomic Write Unit (Normal): 1 00:11:52.936 Atomic Write Unit (PFail): 1 00:11:52.936 Atomic Compare & Write Unit: 1 00:11:52.936 Fused Compare & Write: Not Supported 00:11:52.936 Scatter-Gather List 00:11:52.936 SGL Command Set: Supported 00:11:52.936 SGL Keyed: Not Supported 00:11:52.936 SGL Bit Bucket Descriptor: Not Supported 00:11:52.936 SGL Metadata Pointer: Not Supported 00:11:52.936 Oversized SGL: Not Supported 00:11:52.936 SGL Metadata Address: Not Supported 00:11:52.936 SGL Offset: Not Supported 00:11:52.936 Transport SGL Data Block: Not Supported 00:11:52.936 Replay Protected Memory Block: Not Supported 00:11:52.936 00:11:52.936 Firmware Slot Information 00:11:52.936 ========================= 00:11:52.936 Active slot: 1 00:11:52.936 Slot 1 Firmware Revision: 1.0 00:11:52.936 00:11:52.936 00:11:52.936 Commands Supported and Effects 00:11:52.936 ============================== 00:11:52.936 Admin Commands 00:11:52.936 -------------- 00:11:52.936 Delete I/O Submission Queue (00h): Supported 00:11:52.936 Create I/O Submission Queue (01h): Supported 00:11:52.936 Get Log Page (02h): Supported 00:11:52.936 Delete I/O Completion Queue (04h): Supported 00:11:52.936 Create I/O Completion Queue (05h): Supported 00:11:52.936 Identify (06h): Supported 00:11:52.936 Abort (08h): Supported 00:11:52.936 Set Features (09h): Supported 00:11:52.936 Get Features (0Ah): Supported 00:11:52.936 Asynchronous Event Request (0Ch): Supported 00:11:52.936 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:52.936 Directive Send (19h): Supported 00:11:52.936 Directive Receive (1Ah): Supported 00:11:52.937 Virtualization Management (1Ch): Supported 00:11:52.937 Doorbell Buffer Config (7Ch): Supported 00:11:52.937 Format NVM (80h): Supported LBA-Change 00:11:52.937 I/O Commands 00:11:52.937 ------------ 00:11:52.937 Flush (00h): Supported LBA-Change 00:11:52.937 Write (01h): Supported LBA-Change 00:11:52.937 Read (02h): Supported 00:11:52.937 Compare (05h): Supported 00:11:52.937 Write Zeroes (08h): Supported LBA-Change 00:11:52.937 
Dataset Management (09h): Supported LBA-Change 00:11:52.937 Unknown (0Ch): Supported 00:11:52.937 Unknown (12h): Supported 00:11:52.937 Copy (19h): Supported LBA-Change 00:11:52.937 Unknown (1Dh): Supported LBA-Change 00:11:52.937 00:11:52.937 Error Log 00:11:52.937 ========= 00:11:52.937 00:11:52.937 Arbitration 00:11:52.937 =========== 00:11:52.937 Arbitration Burst: no limit 00:11:52.937 00:11:52.937 Power Management 00:11:52.937 ================ 00:11:52.937 Number of Power States: 1 00:11:52.937 Current Power State: Power State #0 00:11:52.937 Power State #0: 00:11:52.937 Max Power: 25.00 W 00:11:52.937 Non-Operational State: Operational 00:11:52.937 Entry Latency: 16 microseconds 00:11:52.937 Exit Latency: 4 microseconds 00:11:52.937 Relative Read Throughput: 0 00:11:52.937 Relative Read Latency: 0 00:11:52.937 Relative Write Throughput: 0 00:11:52.937 Relative Write Latency: 0 00:11:52.937 Idle Power: Not Reported 00:11:52.937 Active Power: Not Reported 00:11:52.937 Non-Operational Permissive Mode: Not Supported 00:11:52.937 00:11:52.937 Health Information 00:11:52.937 ================== 00:11:52.937 Critical Warnings: 00:11:52.937 Available Spare Space: OK 00:11:52.937 Temperature: OK 00:11:52.937 Device Reliability: OK 00:11:52.937 Read Only: No 00:11:52.937 Volatile Memory Backup: OK 00:11:52.937 Current Temperature: 323 Kelvin (50 Celsius) 00:11:52.937 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:52.937 Available Spare: 0% 00:11:52.937 Available Spare Threshold: 0% 00:11:52.937 Life Percentage Used: 0% 00:11:52.937 Data Units Read: 1047 00:11:52.937 Data Units Written: 907 00:11:52.937 Host Read Commands: 48186 00:11:52.937 Host Write Commands: 46878 00:11:52.937 Controller Busy Time: 0 minutes 00:11:52.937 Power Cycles: 0 00:11:52.937 Power On Hours: 0 hours 00:11:52.937 Unsafe Shutdowns: 0 00:11:52.937 Unrecoverable Media Errors: 0 00:11:52.937 Lifetime Error Log Entries: 0 00:11:52.937 Warning Temperature Time: 0 minutes 00:11:52.937 Critical Temperature Time: 0 minutes 00:11:52.937 00:11:52.937 Number of Queues 00:11:52.937 ================ 00:11:52.937 Number of I/O Submission Queues: 64 00:11:52.937 Number of I/O Completion Queues: 64 00:11:52.937 00:11:52.937 ZNS Specific Controller Data 00:11:52.937 ============================ 00:11:52.937 Zone Append Size Limit: 0 00:11:52.937 00:11:52.937 00:11:52.937 Active Namespaces 00:11:52.937 ================= 00:11:52.937 Namespace ID:1 00:11:52.937 Error Recovery Timeout: Unlimited 00:11:52.937 Command Set Identifier: NVM (00h) 00:11:52.937 Deallocate: Supported 00:11:52.937 Deallocated/Unwritten Error: Supported 00:11:52.937 Deallocated Read Value: All 0x00 00:11:52.937 Deallocate in Write Zeroes: Not Supported 00:11:52.937 Deallocated Guard Field: 0xFFFF 00:11:52.937 Flush: Supported 00:11:52.937 Reservation: Not Supported 00:11:52.937 Namespace Sharing Capabilities: Private 00:11:52.937 Size (in LBAs): 1310720 (5GiB) 00:11:52.937 Capacity (in LBAs): 1310720 (5GiB) 00:11:52.937 Utilization (in LBAs): 1310720 (5GiB) 00:11:52.937 Thin Provisioning: Not Supported 00:11:52.937 Per-NS Atomic Units: No 00:11:52.937 Maximum Single Source Range Length: 128 00:11:52.937 Maximum Copy Length: 128 00:11:52.937 Maximum Source Range Count: 128 00:11:52.937 NGUID/EUI64 Never Reused: No 00:11:52.937 Namespace Write Protected: No 00:11:52.937 Number of LBA Formats: 8 00:11:52.937 Current LBA Format: LBA Format #04 00:11:52.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:52.937 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:11:52.937 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:52.937 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:52.937 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:52.937 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:52.937 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:52.937 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:52.937 00:11:52.937 NVM Specific Namespace Data 00:11:52.937 =========================== 00:11:52.937 Logical Block Storage Tag Mask: 0 00:11:52.937 Protection Information Capabilities: 00:11:52.937 16b Guard Protection Information Storage Tag Support: No 00:11:52.937 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:52.937 Storage Tag Check Read Support: No 00:11:52.937 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:52.937 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:52.937 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:53.197 ===================================================== 00:11:53.197 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:53.197 ===================================================== 00:11:53.197 Controller Capabilities/Features 00:11:53.197 ================================ 00:11:53.197 Vendor ID: 1b36 00:11:53.197 Subsystem Vendor ID: 1af4 00:11:53.197 Serial Number: 12342 00:11:53.197 Model Number: QEMU NVMe Ctrl 00:11:53.197 Firmware Version: 8.0.0 00:11:53.197 Recommended Arb Burst: 6 00:11:53.197 IEEE OUI Identifier: 00 54 52 00:11:53.197 Multi-path I/O 00:11:53.197 May have multiple subsystem ports: No 00:11:53.197 May have multiple controllers: No 00:11:53.197 Associated with SR-IOV VF: No 00:11:53.197 Max Data Transfer Size: 524288 00:11:53.197 Max Number of Namespaces: 256 00:11:53.197 Max Number of I/O Queues: 64 00:11:53.197 NVMe Specification Version (VS): 1.4 00:11:53.197 NVMe Specification Version (Identify): 1.4 00:11:53.197 Maximum Queue Entries: 2048 00:11:53.197 Contiguous Queues Required: Yes 00:11:53.197 Arbitration Mechanisms Supported 00:11:53.197 Weighted Round Robin: Not Supported 00:11:53.197 Vendor Specific: Not Supported 00:11:53.197 Reset Timeout: 7500 ms 00:11:53.197 Doorbell Stride: 4 bytes 00:11:53.197 NVM Subsystem Reset: Not Supported 00:11:53.197 Command Sets Supported 00:11:53.197 NVM Command Set: Supported 00:11:53.197 Boot Partition: Not Supported 00:11:53.197 Memory Page Size Minimum: 4096 bytes 00:11:53.198 Memory Page Size Maximum: 65536 bytes 00:11:53.198 Persistent Memory Region: Not Supported 00:11:53.198 Optional Asynchronous Events Supported 00:11:53.198 Namespace Attribute Notices: Supported 00:11:53.198 Firmware 
Activation Notices: Not Supported 00:11:53.198 ANA Change Notices: Not Supported 00:11:53.198 PLE Aggregate Log Change Notices: Not Supported 00:11:53.198 LBA Status Info Alert Notices: Not Supported 00:11:53.198 EGE Aggregate Log Change Notices: Not Supported 00:11:53.198 Normal NVM Subsystem Shutdown event: Not Supported 00:11:53.198 Zone Descriptor Change Notices: Not Supported 00:11:53.198 Discovery Log Change Notices: Not Supported 00:11:53.198 Controller Attributes 00:11:53.198 128-bit Host Identifier: Not Supported 00:11:53.198 Non-Operational Permissive Mode: Not Supported 00:11:53.198 NVM Sets: Not Supported 00:11:53.198 Read Recovery Levels: Not Supported 00:11:53.198 Endurance Groups: Not Supported 00:11:53.198 Predictable Latency Mode: Not Supported 00:11:53.198 Traffic Based Keep ALive: Not Supported 00:11:53.198 Namespace Granularity: Not Supported 00:11:53.198 SQ Associations: Not Supported 00:11:53.198 UUID List: Not Supported 00:11:53.198 Multi-Domain Subsystem: Not Supported 00:11:53.198 Fixed Capacity Management: Not Supported 00:11:53.198 Variable Capacity Management: Not Supported 00:11:53.198 Delete Endurance Group: Not Supported 00:11:53.198 Delete NVM Set: Not Supported 00:11:53.198 Extended LBA Formats Supported: Supported 00:11:53.198 Flexible Data Placement Supported: Not Supported 00:11:53.198 00:11:53.198 Controller Memory Buffer Support 00:11:53.198 ================================ 00:11:53.198 Supported: No 00:11:53.198 00:11:53.198 Persistent Memory Region Support 00:11:53.198 ================================ 00:11:53.198 Supported: No 00:11:53.198 00:11:53.198 Admin Command Set Attributes 00:11:53.198 ============================ 00:11:53.198 Security Send/Receive: Not Supported 00:11:53.198 Format NVM: Supported 00:11:53.198 Firmware Activate/Download: Not Supported 00:11:53.198 Namespace Management: Supported 00:11:53.198 Device Self-Test: Not Supported 00:11:53.198 Directives: Supported 00:11:53.198 NVMe-MI: Not Supported 00:11:53.198 Virtualization Management: Not Supported 00:11:53.198 Doorbell Buffer Config: Supported 00:11:53.198 Get LBA Status Capability: Not Supported 00:11:53.198 Command & Feature Lockdown Capability: Not Supported 00:11:53.198 Abort Command Limit: 4 00:11:53.198 Async Event Request Limit: 4 00:11:53.198 Number of Firmware Slots: N/A 00:11:53.198 Firmware Slot 1 Read-Only: N/A 00:11:53.198 Firmware Activation Without Reset: N/A 00:11:53.198 Multiple Update Detection Support: N/A 00:11:53.198 Firmware Update Granularity: No Information Provided 00:11:53.198 Per-Namespace SMART Log: Yes 00:11:53.198 Asymmetric Namespace Access Log Page: Not Supported 00:11:53.198 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:53.198 Command Effects Log Page: Supported 00:11:53.198 Get Log Page Extended Data: Supported 00:11:53.198 Telemetry Log Pages: Not Supported 00:11:53.198 Persistent Event Log Pages: Not Supported 00:11:53.198 Supported Log Pages Log Page: May Support 00:11:53.198 Commands Supported & Effects Log Page: Not Supported 00:11:53.198 Feature Identifiers & Effects Log Page:May Support 00:11:53.198 NVMe-MI Commands & Effects Log Page: May Support 00:11:53.198 Data Area 4 for Telemetry Log: Not Supported 00:11:53.198 Error Log Page Entries Supported: 1 00:11:53.198 Keep Alive: Not Supported 00:11:53.198 00:11:53.198 NVM Command Set Attributes 00:11:53.198 ========================== 00:11:53.198 Submission Queue Entry Size 00:11:53.198 Max: 64 00:11:53.198 Min: 64 00:11:53.198 Completion Queue Entry Size 00:11:53.198 Max: 16 
00:11:53.198 Min: 16 00:11:53.198 Number of Namespaces: 256 00:11:53.198 Compare Command: Supported 00:11:53.198 Write Uncorrectable Command: Not Supported 00:11:53.198 Dataset Management Command: Supported 00:11:53.198 Write Zeroes Command: Supported 00:11:53.198 Set Features Save Field: Supported 00:11:53.198 Reservations: Not Supported 00:11:53.198 Timestamp: Supported 00:11:53.198 Copy: Supported 00:11:53.198 Volatile Write Cache: Present 00:11:53.198 Atomic Write Unit (Normal): 1 00:11:53.198 Atomic Write Unit (PFail): 1 00:11:53.198 Atomic Compare & Write Unit: 1 00:11:53.198 Fused Compare & Write: Not Supported 00:11:53.198 Scatter-Gather List 00:11:53.198 SGL Command Set: Supported 00:11:53.198 SGL Keyed: Not Supported 00:11:53.198 SGL Bit Bucket Descriptor: Not Supported 00:11:53.198 SGL Metadata Pointer: Not Supported 00:11:53.198 Oversized SGL: Not Supported 00:11:53.198 SGL Metadata Address: Not Supported 00:11:53.198 SGL Offset: Not Supported 00:11:53.198 Transport SGL Data Block: Not Supported 00:11:53.198 Replay Protected Memory Block: Not Supported 00:11:53.198 00:11:53.198 Firmware Slot Information 00:11:53.198 ========================= 00:11:53.198 Active slot: 1 00:11:53.198 Slot 1 Firmware Revision: 1.0 00:11:53.198 00:11:53.198 00:11:53.198 Commands Supported and Effects 00:11:53.198 ============================== 00:11:53.198 Admin Commands 00:11:53.198 -------------- 00:11:53.198 Delete I/O Submission Queue (00h): Supported 00:11:53.198 Create I/O Submission Queue (01h): Supported 00:11:53.198 Get Log Page (02h): Supported 00:11:53.198 Delete I/O Completion Queue (04h): Supported 00:11:53.198 Create I/O Completion Queue (05h): Supported 00:11:53.198 Identify (06h): Supported 00:11:53.198 Abort (08h): Supported 00:11:53.198 Set Features (09h): Supported 00:11:53.198 Get Features (0Ah): Supported 00:11:53.198 Asynchronous Event Request (0Ch): Supported 00:11:53.198 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:53.198 Directive Send (19h): Supported 00:11:53.198 Directive Receive (1Ah): Supported 00:11:53.198 Virtualization Management (1Ch): Supported 00:11:53.198 Doorbell Buffer Config (7Ch): Supported 00:11:53.198 Format NVM (80h): Supported LBA-Change 00:11:53.198 I/O Commands 00:11:53.198 ------------ 00:11:53.198 Flush (00h): Supported LBA-Change 00:11:53.198 Write (01h): Supported LBA-Change 00:11:53.198 Read (02h): Supported 00:11:53.198 Compare (05h): Supported 00:11:53.198 Write Zeroes (08h): Supported LBA-Change 00:11:53.198 Dataset Management (09h): Supported LBA-Change 00:11:53.198 Unknown (0Ch): Supported 00:11:53.198 Unknown (12h): Supported 00:11:53.198 Copy (19h): Supported LBA-Change 00:11:53.198 Unknown (1Dh): Supported LBA-Change 00:11:53.198 00:11:53.198 Error Log 00:11:53.198 ========= 00:11:53.198 00:11:53.198 Arbitration 00:11:53.198 =========== 00:11:53.198 Arbitration Burst: no limit 00:11:53.198 00:11:53.198 Power Management 00:11:53.198 ================ 00:11:53.198 Number of Power States: 1 00:11:53.198 Current Power State: Power State #0 00:11:53.198 Power State #0: 00:11:53.198 Max Power: 25.00 W 00:11:53.198 Non-Operational State: Operational 00:11:53.198 Entry Latency: 16 microseconds 00:11:53.198 Exit Latency: 4 microseconds 00:11:53.198 Relative Read Throughput: 0 00:11:53.198 Relative Read Latency: 0 00:11:53.198 Relative Write Throughput: 0 00:11:53.198 Relative Write Latency: 0 00:11:53.198 Idle Power: Not Reported 00:11:53.198 Active Power: Not Reported 00:11:53.198 Non-Operational Permissive Mode: Not Supported 
00:11:53.198 00:11:53.198 Health Information 00:11:53.198 ================== 00:11:53.198 Critical Warnings: 00:11:53.198 Available Spare Space: OK 00:11:53.198 Temperature: OK 00:11:53.198 Device Reliability: OK 00:11:53.198 Read Only: No 00:11:53.198 Volatile Memory Backup: OK 00:11:53.198 Current Temperature: 323 Kelvin (50 Celsius) 00:11:53.198 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:53.198 Available Spare: 0% 00:11:53.198 Available Spare Threshold: 0% 00:11:53.198 Life Percentage Used: 0% 00:11:53.198 Data Units Read: 2188 00:11:53.198 Data Units Written: 1976 00:11:53.198 Host Read Commands: 100103 00:11:53.198 Host Write Commands: 98372 00:11:53.198 Controller Busy Time: 0 minutes 00:11:53.198 Power Cycles: 0 00:11:53.198 Power On Hours: 0 hours 00:11:53.199 Unsafe Shutdowns: 0 00:11:53.199 Unrecoverable Media Errors: 0 00:11:53.199 Lifetime Error Log Entries: 0 00:11:53.199 Warning Temperature Time: 0 minutes 00:11:53.199 Critical Temperature Time: 0 minutes 00:11:53.199 00:11:53.199 Number of Queues 00:11:53.199 ================ 00:11:53.199 Number of I/O Submission Queues: 64 00:11:53.199 Number of I/O Completion Queues: 64 00:11:53.199 00:11:53.199 ZNS Specific Controller Data 00:11:53.199 ============================ 00:11:53.199 Zone Append Size Limit: 0 00:11:53.199 00:11:53.199 00:11:53.199 Active Namespaces 00:11:53.199 ================= 00:11:53.199 Namespace ID:1 00:11:53.199 Error Recovery Timeout: Unlimited 00:11:53.199 Command Set Identifier: NVM (00h) 00:11:53.199 Deallocate: Supported 00:11:53.199 Deallocated/Unwritten Error: Supported 00:11:53.199 Deallocated Read Value: All 0x00 00:11:53.199 Deallocate in Write Zeroes: Not Supported 00:11:53.199 Deallocated Guard Field: 0xFFFF 00:11:53.199 Flush: Supported 00:11:53.199 Reservation: Not Supported 00:11:53.199 Namespace Sharing Capabilities: Private 00:11:53.199 Size (in LBAs): 1048576 (4GiB) 00:11:53.199 Capacity (in LBAs): 1048576 (4GiB) 00:11:53.199 Utilization (in LBAs): 1048576 (4GiB) 00:11:53.199 Thin Provisioning: Not Supported 00:11:53.199 Per-NS Atomic Units: No 00:11:53.199 Maximum Single Source Range Length: 128 00:11:53.199 Maximum Copy Length: 128 00:11:53.199 Maximum Source Range Count: 128 00:11:53.199 NGUID/EUI64 Never Reused: No 00:11:53.199 Namespace Write Protected: No 00:11:53.199 Number of LBA Formats: 8 00:11:53.199 Current LBA Format: LBA Format #04 00:11:53.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:53.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:53.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:53.199 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:53.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:53.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:53.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:53.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:53.199 00:11:53.199 NVM Specific Namespace Data 00:11:53.199 =========================== 00:11:53.199 Logical Block Storage Tag Mask: 0 00:11:53.199 Protection Information Capabilities: 00:11:53.199 16b Guard Protection Information Storage Tag Support: No 00:11:53.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:53.199 Storage Tag Check Read Support: No 00:11:53.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Namespace ID:2 00:11:53.199 Error Recovery Timeout: Unlimited 00:11:53.199 Command Set Identifier: NVM (00h) 00:11:53.199 Deallocate: Supported 00:11:53.199 Deallocated/Unwritten Error: Supported 00:11:53.199 Deallocated Read Value: All 0x00 00:11:53.199 Deallocate in Write Zeroes: Not Supported 00:11:53.199 Deallocated Guard Field: 0xFFFF 00:11:53.199 Flush: Supported 00:11:53.199 Reservation: Not Supported 00:11:53.199 Namespace Sharing Capabilities: Private 00:11:53.199 Size (in LBAs): 1048576 (4GiB) 00:11:53.199 Capacity (in LBAs): 1048576 (4GiB) 00:11:53.199 Utilization (in LBAs): 1048576 (4GiB) 00:11:53.199 Thin Provisioning: Not Supported 00:11:53.199 Per-NS Atomic Units: No 00:11:53.199 Maximum Single Source Range Length: 128 00:11:53.199 Maximum Copy Length: 128 00:11:53.199 Maximum Source Range Count: 128 00:11:53.199 NGUID/EUI64 Never Reused: No 00:11:53.199 Namespace Write Protected: No 00:11:53.199 Number of LBA Formats: 8 00:11:53.199 Current LBA Format: LBA Format #04 00:11:53.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:53.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:53.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:53.199 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:53.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:53.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:53.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:53.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:53.199 00:11:53.199 NVM Specific Namespace Data 00:11:53.199 =========================== 00:11:53.199 Logical Block Storage Tag Mask: 0 00:11:53.199 Protection Information Capabilities: 00:11:53.199 16b Guard Protection Information Storage Tag Support: No 00:11:53.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:53.199 Storage Tag Check Read Support: No 00:11:53.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Namespace ID:3 00:11:53.199 Error Recovery Timeout: Unlimited 00:11:53.199 Command Set Identifier: NVM (00h) 00:11:53.199 Deallocate: Supported 00:11:53.199 Deallocated/Unwritten Error: Supported 00:11:53.199 Deallocated Read 
Value: All 0x00 00:11:53.199 Deallocate in Write Zeroes: Not Supported 00:11:53.199 Deallocated Guard Field: 0xFFFF 00:11:53.199 Flush: Supported 00:11:53.199 Reservation: Not Supported 00:11:53.199 Namespace Sharing Capabilities: Private 00:11:53.199 Size (in LBAs): 1048576 (4GiB) 00:11:53.199 Capacity (in LBAs): 1048576 (4GiB) 00:11:53.199 Utilization (in LBAs): 1048576 (4GiB) 00:11:53.199 Thin Provisioning: Not Supported 00:11:53.199 Per-NS Atomic Units: No 00:11:53.199 Maximum Single Source Range Length: 128 00:11:53.199 Maximum Copy Length: 128 00:11:53.199 Maximum Source Range Count: 128 00:11:53.199 NGUID/EUI64 Never Reused: No 00:11:53.199 Namespace Write Protected: No 00:11:53.199 Number of LBA Formats: 8 00:11:53.199 Current LBA Format: LBA Format #04 00:11:53.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:53.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:53.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:53.199 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:53.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:53.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:53.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:53.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:53.199 00:11:53.199 NVM Specific Namespace Data 00:11:53.199 =========================== 00:11:53.199 Logical Block Storage Tag Mask: 0 00:11:53.199 Protection Information Capabilities: 00:11:53.199 16b Guard Protection Information Storage Tag Support: No 00:11:53.199 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:53.199 Storage Tag Check Read Support: No 00:11:53.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.199 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:53.199 15:38:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:53.459 ===================================================== 00:11:53.459 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:53.459 ===================================================== 00:11:53.459 Controller Capabilities/Features 00:11:53.459 ================================ 00:11:53.459 Vendor ID: 1b36 00:11:53.459 Subsystem Vendor ID: 1af4 00:11:53.459 Serial Number: 12343 00:11:53.459 Model Number: QEMU NVMe Ctrl 00:11:53.459 Firmware Version: 8.0.0 00:11:53.459 Recommended Arb Burst: 6 00:11:53.459 IEEE OUI Identifier: 00 54 52 00:11:53.459 Multi-path I/O 00:11:53.459 May have multiple subsystem ports: No 00:11:53.459 May have multiple controllers: Yes 00:11:53.459 Associated with SR-IOV VF: No 00:11:53.459 Max Data Transfer Size: 524288 00:11:53.459 Max Number of Namespaces: 
256 00:11:53.459 Max Number of I/O Queues: 64 00:11:53.459 NVMe Specification Version (VS): 1.4 00:11:53.459 NVMe Specification Version (Identify): 1.4 00:11:53.459 Maximum Queue Entries: 2048 00:11:53.459 Contiguous Queues Required: Yes 00:11:53.459 Arbitration Mechanisms Supported 00:11:53.459 Weighted Round Robin: Not Supported 00:11:53.459 Vendor Specific: Not Supported 00:11:53.459 Reset Timeout: 7500 ms 00:11:53.459 Doorbell Stride: 4 bytes 00:11:53.459 NVM Subsystem Reset: Not Supported 00:11:53.459 Command Sets Supported 00:11:53.459 NVM Command Set: Supported 00:11:53.459 Boot Partition: Not Supported 00:11:53.459 Memory Page Size Minimum: 4096 bytes 00:11:53.459 Memory Page Size Maximum: 65536 bytes 00:11:53.459 Persistent Memory Region: Not Supported 00:11:53.459 Optional Asynchronous Events Supported 00:11:53.459 Namespace Attribute Notices: Supported 00:11:53.459 Firmware Activation Notices: Not Supported 00:11:53.459 ANA Change Notices: Not Supported 00:11:53.459 PLE Aggregate Log Change Notices: Not Supported 00:11:53.459 LBA Status Info Alert Notices: Not Supported 00:11:53.459 EGE Aggregate Log Change Notices: Not Supported 00:11:53.459 Normal NVM Subsystem Shutdown event: Not Supported 00:11:53.459 Zone Descriptor Change Notices: Not Supported 00:11:53.459 Discovery Log Change Notices: Not Supported 00:11:53.459 Controller Attributes 00:11:53.459 128-bit Host Identifier: Not Supported 00:11:53.459 Non-Operational Permissive Mode: Not Supported 00:11:53.459 NVM Sets: Not Supported 00:11:53.459 Read Recovery Levels: Not Supported 00:11:53.459 Endurance Groups: Supported 00:11:53.459 Predictable Latency Mode: Not Supported 00:11:53.459 Traffic Based Keep Alive: Not Supported 00:11:53.459 Namespace Granularity: Not Supported 00:11:53.459 SQ Associations: Not Supported 00:11:53.459 UUID List: Not Supported 00:11:53.459 Multi-Domain Subsystem: Not Supported 00:11:53.459 Fixed Capacity Management: Not Supported 00:11:53.459 Variable Capacity Management: Not Supported 00:11:53.459 Delete Endurance Group: Not Supported 00:11:53.459 Delete NVM Set: Not Supported 00:11:53.459 Extended LBA Formats Supported: Supported 00:11:53.459 Flexible Data Placement Supported: Supported 00:11:53.460 00:11:53.460 Controller Memory Buffer Support 00:11:53.460 ================================ 00:11:53.460 Supported: No 00:11:53.460 00:11:53.460 Persistent Memory Region Support 00:11:53.460 ================================ 00:11:53.460 Supported: No 00:11:53.460 00:11:53.460 Admin Command Set Attributes 00:11:53.460 ============================ 00:11:53.460 Security Send/Receive: Not Supported 00:11:53.460 Format NVM: Supported 00:11:53.460 Firmware Activate/Download: Not Supported 00:11:53.460 Namespace Management: Supported 00:11:53.460 Device Self-Test: Not Supported 00:11:53.460 Directives: Supported 00:11:53.460 NVMe-MI: Not Supported 00:11:53.460 Virtualization Management: Not Supported 00:11:53.460 Doorbell Buffer Config: Supported 00:11:53.460 Get LBA Status Capability: Not Supported 00:11:53.460 Command & Feature Lockdown Capability: Not Supported 00:11:53.460 Abort Command Limit: 4 00:11:53.460 Async Event Request Limit: 4 00:11:53.460 Number of Firmware Slots: N/A 00:11:53.460 Firmware Slot 1 Read-Only: N/A 00:11:53.460 Firmware Activation Without Reset: N/A 00:11:53.460 Multiple Update Detection Support: N/A 00:11:53.460 Firmware Update Granularity: No Information Provided 00:11:53.460 Per-Namespace SMART Log: Yes 00:11:53.460 Asymmetric Namespace Access Log Page: Not Supported
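[Editor's note] The nvme.sh xtrace lines earlier in this output show how these identify dumps are produced: the test loops over the controllers under test and runs spdk_nvme_identify once per PCIe address. The following is a minimal sketch of that loop; the BDF list and the repo path are assumptions written out for illustration, not values taken from the test script itself.

#!/usr/bin/env bash
# Sketch of the per-controller identify loop seen in the nvme.sh xtrace.
# Assumption: the four QEMU-emulated controllers exercised in this run.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
  # -r gives the transport ID: PCIe transport plus the controller's
  #    bus/device/function address (exactly the form shown in the log).
  # -i 0 joins shared-memory group 0 so the tool can run alongside
  #    other SPDK processes in the same test (our reading of the flag).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:PCIe traddr:${bdf}" -i 0
done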
00:11:53.460 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:53.460 Command Effects Log Page: Supported 00:11:53.460 Get Log Page Extended Data: Supported 00:11:53.460 Telemetry Log Pages: Not Supported 00:11:53.460 Persistent Event Log Pages: Not Supported 00:11:53.460 Supported Log Pages Log Page: May Support 00:11:53.460 Commands Supported & Effects Log Page: Not Supported 00:11:53.460 Feature Identifiers & Effects Log Page: May Support 00:11:53.460 NVMe-MI Commands & Effects Log Page: May Support 00:11:53.460 Data Area 4 for Telemetry Log: Not Supported 00:11:53.460 Error Log Page Entries Supported: 1 00:11:53.460 Keep Alive: Not Supported 00:11:53.460 00:11:53.460 NVM Command Set Attributes 00:11:53.460 ========================== 00:11:53.460 Submission Queue Entry Size 00:11:53.460 Max: 64 00:11:53.460 Min: 64 00:11:53.460 Completion Queue Entry Size 00:11:53.460 Max: 16 00:11:53.460 Min: 16 00:11:53.460 Number of Namespaces: 256 00:11:53.460 Compare Command: Supported 00:11:53.460 Write Uncorrectable Command: Not Supported 00:11:53.460 Dataset Management Command: Supported 00:11:53.460 Write Zeroes Command: Supported 00:11:53.460 Set Features Save Field: Supported 00:11:53.460 Reservations: Not Supported 00:11:53.460 Timestamp: Supported 00:11:53.460 Copy: Supported 00:11:53.460 Volatile Write Cache: Present 00:11:53.460 Atomic Write Unit (Normal): 1 00:11:53.460 Atomic Write Unit (PFail): 1 00:11:53.460 Atomic Compare & Write Unit: 1 00:11:53.460 Fused Compare & Write: Not Supported 00:11:53.460 Scatter-Gather List 00:11:53.460 SGL Command Set: Supported 00:11:53.460 SGL Keyed: Not Supported 00:11:53.460 SGL Bit Bucket Descriptor: Not Supported 00:11:53.460 SGL Metadata Pointer: Not Supported 00:11:53.460 Oversized SGL: Not Supported 00:11:53.460 SGL Metadata Address: Not Supported 00:11:53.460 SGL Offset: Not Supported 00:11:53.460 Transport SGL Data Block: Not Supported 00:11:53.460 Replay Protected Memory Block: Not Supported 00:11:53.460 00:11:53.460 Firmware Slot Information 00:11:53.460 ========================= 00:11:53.460 Active slot: 1 00:11:53.460 Slot 1 Firmware Revision: 1.0 00:11:53.460 00:11:53.460 00:11:53.460 Commands Supported and Effects 00:11:53.460 ============================== 00:11:53.460 Admin Commands 00:11:53.460 -------------- 00:11:53.460 Delete I/O Submission Queue (00h): Supported 00:11:53.460 Create I/O Submission Queue (01h): Supported 00:11:53.460 Get Log Page (02h): Supported 00:11:53.460 Delete I/O Completion Queue (04h): Supported 00:11:53.460 Create I/O Completion Queue (05h): Supported 00:11:53.460 Identify (06h): Supported 00:11:53.460 Abort (08h): Supported 00:11:53.460 Set Features (09h): Supported 00:11:53.460 Get Features (0Ah): Supported 00:11:53.460 Asynchronous Event Request (0Ch): Supported 00:11:53.460 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:53.460 Directive Send (19h): Supported 00:11:53.460 Directive Receive (1Ah): Supported 00:11:53.460 Virtualization Management (1Ch): Supported 00:11:53.460 Doorbell Buffer Config (7Ch): Supported 00:11:53.460 Format NVM (80h): Supported LBA-Change 00:11:53.460 I/O Commands 00:11:53.460 ------------ 00:11:53.460 Flush (00h): Supported LBA-Change 00:11:53.460 Write (01h): Supported LBA-Change 00:11:53.460 Read (02h): Supported 00:11:53.460 Compare (05h): Supported 00:11:53.460 Write Zeroes (08h): Supported LBA-Change 00:11:53.460 Dataset Management (09h): Supported LBA-Change 00:11:53.460 Unknown (0Ch): Supported 00:11:53.460 Unknown (12h): Supported 00:11:53.460 Copy
(19h): Supported LBA-Change 00:11:53.460 Unknown (1Dh): Supported LBA-Change 00:11:53.460 00:11:53.460 Error Log 00:11:53.460 ========= 00:11:53.460 00:11:53.460 Arbitration 00:11:53.460 =========== 00:11:53.460 Arbitration Burst: no limit 00:11:53.460 00:11:53.460 Power Management 00:11:53.460 ================ 00:11:53.460 Number of Power States: 1 00:11:53.460 Current Power State: Power State #0 00:11:53.460 Power State #0: 00:11:53.460 Max Power: 25.00 W 00:11:53.460 Non-Operational State: Operational 00:11:53.460 Entry Latency: 16 microseconds 00:11:53.460 Exit Latency: 4 microseconds 00:11:53.460 Relative Read Throughput: 0 00:11:53.460 Relative Read Latency: 0 00:11:53.460 Relative Write Throughput: 0 00:11:53.460 Relative Write Latency: 0 00:11:53.460 Idle Power: Not Reported 00:11:53.460 Active Power: Not Reported 00:11:53.460 Non-Operational Permissive Mode: Not Supported 00:11:53.460 00:11:53.460 Health Information 00:11:53.460 ================== 00:11:53.460 Critical Warnings: 00:11:53.460 Available Spare Space: OK 00:11:53.460 Temperature: OK 00:11:53.460 Device Reliability: OK 00:11:53.460 Read Only: No 00:11:53.460 Volatile Memory Backup: OK 00:11:53.460 Current Temperature: 323 Kelvin (50 Celsius) 00:11:53.460 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:53.460 Available Spare: 0% 00:11:53.460 Available Spare Threshold: 0% 00:11:53.460 Life Percentage Used: 0% 00:11:53.460 Data Units Read: 815 00:11:53.460 Data Units Written: 744 00:11:53.460 Host Read Commands: 34140 00:11:53.460 Host Write Commands: 33564 00:11:53.460 Controller Busy Time: 0 minutes 00:11:53.460 Power Cycles: 0 00:11:53.460 Power On Hours: 0 hours 00:11:53.460 Unsafe Shutdowns: 0 00:11:53.460 Unrecoverable Media Errors: 0 00:11:53.460 Lifetime Error Log Entries: 0 00:11:53.460 Warning Temperature Time: 0 minutes 00:11:53.460 Critical Temperature Time: 0 minutes 00:11:53.460 00:11:53.460 Number of Queues 00:11:53.460 ================ 00:11:53.460 Number of I/O Submission Queues: 64 00:11:53.460 Number of I/O Completion Queues: 64 00:11:53.460 00:11:53.460 ZNS Specific Controller Data 00:11:53.460 ============================ 00:11:53.460 Zone Append Size Limit: 0 00:11:53.460 00:11:53.460 00:11:53.460 Active Namespaces 00:11:53.460 ================= 00:11:53.460 Namespace ID:1 00:11:53.460 Error Recovery Timeout: Unlimited 00:11:53.460 Command Set Identifier: NVM (00h) 00:11:53.460 Deallocate: Supported 00:11:53.460 Deallocated/Unwritten Error: Supported 00:11:53.460 Deallocated Read Value: All 0x00 00:11:53.460 Deallocate in Write Zeroes: Not Supported 00:11:53.460 Deallocated Guard Field: 0xFFFF 00:11:53.460 Flush: Supported 00:11:53.460 Reservation: Not Supported 00:11:53.460 Namespace Sharing Capabilities: Multiple Controllers 00:11:53.460 Size (in LBAs): 262144 (1GiB) 00:11:53.460 Capacity (in LBAs): 262144 (1GiB) 00:11:53.460 Utilization (in LBAs): 262144 (1GiB) 00:11:53.460 Thin Provisioning: Not Supported 00:11:53.460 Per-NS Atomic Units: No 00:11:53.460 Maximum Single Source Range Length: 128 00:11:53.460 Maximum Copy Length: 128 00:11:53.460 Maximum Source Range Count: 128 00:11:53.460 NGUID/EUI64 Never Reused: No 00:11:53.460 Namespace Write Protected: No 00:11:53.460 Endurance group ID: 1 00:11:53.460 Number of LBA Formats: 8 00:11:53.460 Current LBA Format: LBA Format #04 00:11:53.460 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:53.460 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:53.460 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:53.460 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:53.460 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:53.461 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:53.461 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:53.461 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:53.461 00:11:53.461 Get Feature FDP: 00:11:53.461 ================ 00:11:53.461 Enabled: Yes 00:11:53.461 FDP configuration index: 0 00:11:53.461 00:11:53.461 FDP configurations log page 00:11:53.461 =========================== 00:11:53.461 Number of FDP configurations: 1 00:11:53.461 Version: 0 00:11:53.461 Size: 112 00:11:53.461 FDP Configuration Descriptor: 0 00:11:53.461 Descriptor Size: 96 00:11:53.461 Reclaim Group Identifier format: 2 00:11:53.461 FDP Volatile Write Cache: Not Present 00:11:53.461 FDP Configuration: Valid 00:11:53.461 Vendor Specific Size: 0 00:11:53.461 Number of Reclaim Groups: 2 00:11:53.461 Number of Reclaim Unit Handles: 8 00:11:53.461 Max Placement Identifiers: 128 00:11:53.461 Number of Namespaces Supported: 256 00:11:53.461 Reclaim Unit Nominal Size: 6000000 bytes 00:11:53.461 Estimated Reclaim Unit Time Limit: Not Reported 00:11:53.461 RUH Desc #000: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #001: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #002: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #003: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #004: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #005: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #006: RUH Type: Initially Isolated 00:11:53.461 RUH Desc #007: RUH Type: Initially Isolated 00:11:53.461 00:11:53.461 FDP reclaim unit handle usage log page 00:11:53.461 ====================================== 00:11:53.461 Number of Reclaim Unit Handles: 8 00:11:53.461 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:53.461 RUH Usage Desc #001: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #002: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #003: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #004: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #005: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #006: RUH Attributes: Unused 00:11:53.461 RUH Usage Desc #007: RUH Attributes: Unused 00:11:53.461 00:11:53.461 FDP statistics log page 00:11:53.461 ======================= 00:11:53.461 Host bytes with metadata written: 468455424 00:11:53.461 Media bytes with metadata written: 468619264 00:11:53.461 Media bytes erased: 0 00:11:53.461 00:11:53.461 FDP events log page 00:11:53.461 =================== 00:11:53.461 Number of FDP events: 0 00:11:53.461 00:11:53.461 NVM Specific Namespace Data 00:11:53.461 =========================== 00:11:53.461 Logical Block Storage Tag Mask: 0 00:11:53.461 Protection Information Capabilities: 00:11:53.461 16b Guard Protection Information Storage Tag Support: No 00:11:53.461 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:53.461 Storage Tag Check Read Support: No 00:11:53.461 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:53.461 00:11:53.461 real 0m1.603s 00:11:53.461 user 0m0.649s 00:11:53.461 sys 0m0.722s 00:11:53.461 15:38:42 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.461 15:38:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:53.461 ************************************ 00:11:53.461 END TEST nvme_identify 00:11:53.461 ************************************ 00:11:53.461 15:38:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:53.461 15:38:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.720 15:38:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.720 15:38:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:53.720 ************************************ 00:11:53.720 START TEST nvme_perf 00:11:53.720 ************************************ 00:11:53.720 15:38:42 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:53.720 15:38:42 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:55.099 Initializing NVMe Controllers 00:11:55.099 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:55.099 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:55.099 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:55.099 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:55.099 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:55.099 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:55.099 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:55.099 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:55.099 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:55.099 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:55.099 Initialization complete. Launching workers. 
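[Editor's note] The nvme_perf invocation just above drives the latency results that follow. The sketch below restates that command with the flags annotated; the readings of each flag are paraphrased from spdk_nvme_perf's usage text as we understand it and should be treated as hedged assumptions, not authoritative documentation (in particular -N, whose meaning is not shown anywhere in this log).

#!/usr/bin/env bash
# Sketch of the spdk_nvme_perf call made by the nvme_perf test above.
# Flag readings below are assumptions based on the tool's usage text:
#   -q 128    queue depth: 128 outstanding I/Os per namespace
#   -w read   workload pattern: sequential reads
#   -o 12288  I/O size in bytes (12 KiB, i.e. three 4096-byte blocks,
#             matching the 4096-byte current LBA format reported above)
#   -t 1      run time in seconds
#   -LL       latency tracking; given twice it also prints the detailed
#             per-bucket histograms that follow the summary below
#   -i 0      shared memory group ID 0
#   -N        passed by the test as logged; semantics not shown here
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
  -q 128 -w read -o 12288 -t 1 -LL -i 0 -N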
00:11:55.099 ======================================================== 00:11:55.099 Latency(us) 00:11:55.099 Device Information : IOPS MiB/s Average min max 00:11:55.099 PCIE (0000:00:13.0) NSID 1 from core 0: 13147.17 154.07 9732.91 6311.22 40401.97 00:11:55.099 PCIE (0000:00:10.0) NSID 1 from core 0: 13147.17 154.07 9706.48 5749.81 38920.15 00:11:55.099 PCIE (0000:00:11.0) NSID 1 from core 0: 13147.17 154.07 9681.52 5486.88 37000.27 00:11:55.099 PCIE (0000:00:12.0) NSID 1 from core 0: 13147.17 154.07 9654.27 4464.36 35340.08 00:11:55.099 PCIE (0000:00:12.0) NSID 2 from core 0: 13147.17 154.07 9627.55 4174.65 33576.37 00:11:55.099 PCIE (0000:00:12.0) NSID 3 from core 0: 13147.17 154.07 9600.47 3802.15 31804.48 00:11:55.099 ======================================================== 00:11:55.099 Total : 78883.05 924.41 9667.20 3802.15 40401.97 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8281.367us 00:11:55.099 10.00000% : 8698.415us 00:11:55.099 25.00000% : 8996.305us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10247.447us 00:11:55.099 95.00000% : 10604.916us 00:11:55.099 98.00000% : 15371.171us 00:11:55.099 99.00000% : 16920.204us 00:11:55.099 99.50000% : 30980.655us 00:11:55.099 99.90000% : 40036.538us 00:11:55.099 99.99000% : 40513.164us 00:11:55.099 99.99900% : 40513.164us 00:11:55.099 99.99990% : 40513.164us 00:11:55.099 99.99999% : 40513.164us 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8221.789us 00:11:55.099 10.00000% : 8638.836us 00:11:55.099 25.00000% : 8936.727us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10307.025us 00:11:55.099 95.00000% : 10664.495us 00:11:55.099 98.00000% : 15252.015us 00:11:55.099 99.00000% : 16801.047us 00:11:55.099 99.50000% : 29669.935us 00:11:55.099 99.90000% : 38368.349us 00:11:55.099 99.99000% : 39083.287us 00:11:55.099 99.99900% : 39083.287us 00:11:55.099 99.99990% : 39083.287us 00:11:55.099 99.99999% : 39083.287us 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8281.367us 00:11:55.099 10.00000% : 8698.415us 00:11:55.099 25.00000% : 8936.727us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10247.447us 00:11:55.099 95.00000% : 10604.916us 00:11:55.099 98.00000% : 15073.280us 00:11:55.099 99.00000% : 16681.891us 00:11:55.099 99.50000% : 27882.589us 00:11:55.099 99.90000% : 36700.160us 00:11:55.099 99.99000% : 37176.785us 00:11:55.099 99.99900% : 37176.785us 00:11:55.099 99.99990% : 37176.785us 00:11:55.099 99.99999% : 37176.785us 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8162.211us 00:11:55.099 10.00000% : 8638.836us 00:11:55.099 25.00000% : 8936.727us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10247.447us 00:11:55.099 95.00000% : 10664.495us 00:11:55.099 98.00000% : 15192.436us 00:11:55.099 
99.00000% : 16681.891us 00:11:55.099 99.50000% : 25976.087us 00:11:55.099 99.90000% : 34793.658us 00:11:55.099 99.99000% : 35508.596us 00:11:55.099 99.99900% : 35508.596us 00:11:55.099 99.99990% : 35508.596us 00:11:55.099 99.99999% : 35508.596us 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8102.633us 00:11:55.099 10.00000% : 8638.836us 00:11:55.099 25.00000% : 8936.727us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10247.447us 00:11:55.099 95.00000% : 10604.916us 00:11:55.099 98.00000% : 15073.280us 00:11:55.099 99.00000% : 16681.891us 00:11:55.099 99.50000% : 24188.742us 00:11:55.099 99.90000% : 33125.469us 00:11:55.099 99.99000% : 33602.095us 00:11:55.099 99.99900% : 33602.095us 00:11:55.099 99.99990% : 33602.095us 00:11:55.099 99.99999% : 33602.095us 00:11:55.099 00:11:55.099 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:55.099 ================================================================================= 00:11:55.099 1.00000% : 8043.055us 00:11:55.099 10.00000% : 8698.415us 00:11:55.099 25.00000% : 8936.727us 00:11:55.099 50.00000% : 9353.775us 00:11:55.099 75.00000% : 9770.822us 00:11:55.099 90.00000% : 10247.447us 00:11:55.099 95.00000% : 10604.916us 00:11:55.099 98.00000% : 15132.858us 00:11:55.099 99.00000% : 16681.891us 00:11:55.099 99.50000% : 22401.396us 00:11:55.099 99.90000% : 31457.280us 00:11:55.099 99.99000% : 31933.905us 00:11:55.099 99.99900% : 31933.905us 00:11:55.099 99.99990% : 31933.905us 00:11:55.099 99.99999% : 31933.905us 00:11:55.099 00:11:55.099 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:55.099 ============================================================================== 00:11:55.099 Range in us Cumulative IO count 00:11:55.099 6285.498 - 6315.287: 0.0076% ( 1) 00:11:55.099 6315.287 - 6345.076: 0.0228% ( 2) 00:11:55.099 6345.076 - 6374.865: 0.0455% ( 3) 00:11:55.099 6374.865 - 6404.655: 0.0607% ( 2) 00:11:55.099 6404.655 - 6434.444: 0.0758% ( 2) 00:11:55.099 6434.444 - 6464.233: 0.1062% ( 4) 00:11:55.099 6464.233 - 6494.022: 0.1214% ( 2) 00:11:55.099 6494.022 - 6523.811: 0.1441% ( 3) 00:11:55.099 6523.811 - 6553.600: 0.1593% ( 2) 00:11:55.099 6553.600 - 6583.389: 0.1745% ( 2) 00:11:55.099 6583.389 - 6613.178: 0.1972% ( 3) 00:11:55.099 6613.178 - 6642.967: 0.2124% ( 2) 00:11:55.099 6642.967 - 6672.756: 0.2275% ( 2) 00:11:55.099 6672.756 - 6702.545: 0.2503% ( 3) 00:11:55.099 6702.545 - 6732.335: 0.2579% ( 1) 00:11:55.099 6732.335 - 6762.124: 0.2731% ( 2) 00:11:55.099 6762.124 - 6791.913: 0.2806% ( 1) 00:11:55.099 6791.913 - 6821.702: 0.3034% ( 3) 00:11:55.099 6821.702 - 6851.491: 0.3262% ( 3) 00:11:55.099 6851.491 - 6881.280: 0.3413% ( 2) 00:11:55.099 6881.280 - 6911.069: 0.3641% ( 3) 00:11:55.099 6911.069 - 6940.858: 0.3868% ( 3) 00:11:55.099 6940.858 - 6970.647: 0.4020% ( 2) 00:11:55.099 6970.647 - 7000.436: 0.4248% ( 3) 00:11:55.099 7000.436 - 7030.225: 0.4399% ( 2) 00:11:55.099 7030.225 - 7060.015: 0.4475% ( 1) 00:11:55.099 7060.015 - 7089.804: 0.4551% ( 1) 00:11:55.099 7089.804 - 7119.593: 0.4703% ( 2) 00:11:55.099 7119.593 - 7149.382: 0.4854% ( 2) 00:11:55.099 7983.476 - 8043.055: 0.5158% ( 4) 00:11:55.099 8043.055 - 8102.633: 0.5461% ( 4) 00:11:55.099 8102.633 - 8162.211: 0.6523% ( 14) 00:11:55.099 8162.211 - 8221.789: 0.8343% ( 24) 00:11:55.099 8221.789 - 8281.367: 1.1302% ( 39) 00:11:55.099 
8281.367 - 8340.945: 1.4867% ( 47) 00:11:55.099 8340.945 - 8400.524: 2.1238% ( 84) 00:11:55.099 8400.524 - 8460.102: 3.2008% ( 142) 00:11:55.099 8460.102 - 8519.680: 4.7178% ( 200) 00:11:55.099 8519.680 - 8579.258: 6.5003% ( 235) 00:11:55.099 8579.258 - 8638.836: 8.5786% ( 274) 00:11:55.099 8638.836 - 8698.415: 11.1499% ( 339) 00:11:55.099 8698.415 - 8757.993: 14.2066% ( 403) 00:11:55.099 8757.993 - 8817.571: 17.4226% ( 424) 00:11:55.099 8817.571 - 8877.149: 21.1165% ( 487) 00:11:55.099 8877.149 - 8936.727: 24.9166% ( 501) 00:11:55.099 8936.727 - 8996.305: 28.7773% ( 509) 00:11:55.100 8996.305 - 9055.884: 32.8277% ( 534) 00:11:55.100 9055.884 - 9115.462: 37.0677% ( 559) 00:11:55.100 9115.462 - 9175.040: 41.1939% ( 544) 00:11:55.100 9175.040 - 9234.618: 45.3808% ( 552) 00:11:55.100 9234.618 - 9294.196: 49.4842% ( 541) 00:11:55.100 9294.196 - 9353.775: 53.5953% ( 542) 00:11:55.100 9353.775 - 9413.353: 57.5470% ( 521) 00:11:55.100 9413.353 - 9472.931: 61.2485% ( 488) 00:11:55.100 9472.931 - 9532.509: 64.7148% ( 457) 00:11:55.100 9532.509 - 9592.087: 68.0598% ( 441) 00:11:55.100 9592.087 - 9651.665: 71.1772% ( 411) 00:11:55.100 9651.665 - 9711.244: 74.1277% ( 389) 00:11:55.100 9711.244 - 9770.822: 76.6535% ( 333) 00:11:55.100 9770.822 - 9830.400: 79.0731% ( 319) 00:11:55.100 9830.400 - 9889.978: 81.2121% ( 282) 00:11:55.100 9889.978 - 9949.556: 83.0628% ( 244) 00:11:55.100 9949.556 - 10009.135: 84.9059% ( 243) 00:11:55.100 10009.135 - 10068.713: 86.4988% ( 210) 00:11:55.100 10068.713 - 10128.291: 87.9399% ( 190) 00:11:55.100 10128.291 - 10187.869: 89.3280% ( 183) 00:11:55.100 10187.869 - 10247.447: 90.5871% ( 166) 00:11:55.100 10247.447 - 10307.025: 91.7172% ( 149) 00:11:55.100 10307.025 - 10366.604: 92.7715% ( 139) 00:11:55.100 10366.604 - 10426.182: 93.6817% ( 120) 00:11:55.100 10426.182 - 10485.760: 94.3720% ( 91) 00:11:55.100 10485.760 - 10545.338: 94.7891% ( 55) 00:11:55.100 10545.338 - 10604.916: 95.0698% ( 37) 00:11:55.100 10604.916 - 10664.495: 95.2973% ( 30) 00:11:55.100 10664.495 - 10724.073: 95.4035% ( 14) 00:11:55.100 10724.073 - 10783.651: 95.4642% ( 8) 00:11:55.100 10783.651 - 10843.229: 95.5249% ( 8) 00:11:55.100 10843.229 - 10902.807: 95.5856% ( 8) 00:11:55.100 10902.807 - 10962.385: 95.6311% ( 6) 00:11:55.100 12451.840 - 12511.418: 95.6462% ( 2) 00:11:55.100 12511.418 - 12570.996: 95.6614% ( 2) 00:11:55.100 12570.996 - 12630.575: 95.6842% ( 3) 00:11:55.100 12630.575 - 12690.153: 95.7069% ( 3) 00:11:55.100 12690.153 - 12749.731: 95.7221% ( 2) 00:11:55.100 12749.731 - 12809.309: 95.7448% ( 3) 00:11:55.100 12809.309 - 12868.887: 95.7676% ( 3) 00:11:55.100 12868.887 - 12928.465: 95.7828% ( 2) 00:11:55.100 12928.465 - 12988.044: 95.8055% ( 3) 00:11:55.100 12988.044 - 13047.622: 95.8283% ( 3) 00:11:55.100 13047.622 - 13107.200: 95.8434% ( 2) 00:11:55.100 13107.200 - 13166.778: 95.8662% ( 3) 00:11:55.100 13166.778 - 13226.356: 95.8890% ( 3) 00:11:55.100 13226.356 - 13285.935: 95.9041% ( 2) 00:11:55.100 13285.935 - 13345.513: 95.9421% ( 5) 00:11:55.100 13345.513 - 13405.091: 95.9724% ( 4) 00:11:55.100 13405.091 - 13464.669: 95.9951% ( 3) 00:11:55.100 13464.669 - 13524.247: 96.0255% ( 4) 00:11:55.100 13524.247 - 13583.825: 96.0634% ( 5) 00:11:55.100 13583.825 - 13643.404: 96.1241% ( 8) 00:11:55.100 13643.404 - 13702.982: 96.1999% ( 10) 00:11:55.100 13702.982 - 13762.560: 96.2530% ( 7) 00:11:55.100 13762.560 - 13822.138: 96.2985% ( 6) 00:11:55.100 13822.138 - 13881.716: 96.3668% ( 9) 00:11:55.100 13881.716 - 13941.295: 96.4123% ( 6) 00:11:55.100 13941.295 - 14000.873: 96.4578% ( 6) 
00:11:55.100 14000.873 - 14060.451: 96.5033% ( 6) 00:11:55.100 14060.451 - 14120.029: 96.5337% ( 4) 00:11:55.100 14120.029 - 14179.607: 96.5944% ( 8) 00:11:55.100 14179.607 - 14239.185: 96.6247% ( 4) 00:11:55.100 14239.185 - 14298.764: 96.6626% ( 5) 00:11:55.100 14298.764 - 14358.342: 96.7309% ( 9) 00:11:55.100 14358.342 - 14417.920: 96.7688% ( 5) 00:11:55.100 14417.920 - 14477.498: 96.8447% ( 10) 00:11:55.100 14477.498 - 14537.076: 96.9129% ( 9) 00:11:55.100 14537.076 - 14596.655: 96.9888% ( 10) 00:11:55.100 14596.655 - 14656.233: 97.0570% ( 9) 00:11:55.100 14656.233 - 14715.811: 97.1329% ( 10) 00:11:55.100 14715.811 - 14775.389: 97.2163% ( 11) 00:11:55.100 14775.389 - 14834.967: 97.2998% ( 11) 00:11:55.100 14834.967 - 14894.545: 97.3832% ( 11) 00:11:55.100 14894.545 - 14954.124: 97.4818% ( 13) 00:11:55.100 14954.124 - 15013.702: 97.5501% ( 9) 00:11:55.100 15013.702 - 15073.280: 97.6562% ( 14) 00:11:55.100 15073.280 - 15132.858: 97.7473% ( 12) 00:11:55.100 15132.858 - 15192.436: 97.8383% ( 12) 00:11:55.100 15192.436 - 15252.015: 97.9066% ( 9) 00:11:55.100 15252.015 - 15371.171: 98.0734% ( 22) 00:11:55.100 15371.171 - 15490.327: 98.2327% ( 21) 00:11:55.100 15490.327 - 15609.484: 98.3692% ( 18) 00:11:55.100 15609.484 - 15728.640: 98.4754% ( 14) 00:11:55.100 15728.640 - 15847.796: 98.5589% ( 11) 00:11:55.100 15847.796 - 15966.953: 98.6423% ( 11) 00:11:55.100 15966.953 - 16086.109: 98.7106% ( 9) 00:11:55.100 16086.109 - 16205.265: 98.7637% ( 7) 00:11:55.100 16205.265 - 16324.422: 98.8167% ( 7) 00:11:55.100 16324.422 - 16443.578: 98.8623% ( 6) 00:11:55.100 16443.578 - 16562.735: 98.9078% ( 6) 00:11:55.100 16562.735 - 16681.891: 98.9609% ( 7) 00:11:55.100 16681.891 - 16801.047: 98.9912% ( 4) 00:11:55.100 16801.047 - 16920.204: 99.0215% ( 4) 00:11:55.100 16920.204 - 17039.360: 99.0291% ( 1) 00:11:55.100 28240.058 - 28359.215: 99.0367% ( 1) 00:11:55.100 28359.215 - 28478.371: 99.0595% ( 3) 00:11:55.100 28478.371 - 28597.527: 99.0746% ( 2) 00:11:55.100 28597.527 - 28716.684: 99.0974% ( 3) 00:11:55.100 28716.684 - 28835.840: 99.1201% ( 3) 00:11:55.100 28835.840 - 28954.996: 99.1429% ( 3) 00:11:55.100 28954.996 - 29074.153: 99.1657% ( 3) 00:11:55.100 29074.153 - 29193.309: 99.1884% ( 3) 00:11:55.100 29193.309 - 29312.465: 99.2112% ( 3) 00:11:55.100 29312.465 - 29431.622: 99.2339% ( 3) 00:11:55.100 29431.622 - 29550.778: 99.2567% ( 3) 00:11:55.100 29550.778 - 29669.935: 99.2794% ( 3) 00:11:55.100 29669.935 - 29789.091: 99.3022% ( 3) 00:11:55.100 29789.091 - 29908.247: 99.3249% ( 3) 00:11:55.100 29908.247 - 30027.404: 99.3477% ( 3) 00:11:55.100 30027.404 - 30146.560: 99.3704% ( 3) 00:11:55.100 30146.560 - 30265.716: 99.4008% ( 4) 00:11:55.100 30265.716 - 30384.873: 99.4235% ( 3) 00:11:55.100 30384.873 - 30504.029: 99.4463% ( 3) 00:11:55.100 30504.029 - 30742.342: 99.4918% ( 6) 00:11:55.100 30742.342 - 30980.655: 99.5146% ( 3) 00:11:55.100 37653.411 - 37891.724: 99.5525% ( 5) 00:11:55.100 37891.724 - 38130.036: 99.5904% ( 5) 00:11:55.100 38130.036 - 38368.349: 99.6359% ( 6) 00:11:55.100 38368.349 - 38606.662: 99.6814% ( 6) 00:11:55.100 38606.662 - 38844.975: 99.7194% ( 5) 00:11:55.100 38844.975 - 39083.287: 99.7725% ( 7) 00:11:55.100 39083.287 - 39321.600: 99.8104% ( 5) 00:11:55.100 39321.600 - 39559.913: 99.8559% ( 6) 00:11:55.100 39559.913 - 39798.225: 99.8938% ( 5) 00:11:55.100 39798.225 - 40036.538: 99.9393% ( 6) 00:11:55.100 40036.538 - 40274.851: 99.9848% ( 6) 00:11:55.100 40274.851 - 40513.164: 100.0000% ( 2) 00:11:55.100 00:11:55.100 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 
00:11:55.100 ============================================================================== 00:11:55.100 Range in us Cumulative IO count 00:11:55.100 5749.295 - 5779.084: 0.0228% ( 3) 00:11:55.100 5779.084 - 5808.873: 0.0455% ( 3) 00:11:55.100 5808.873 - 5838.662: 0.0607% ( 2) 00:11:55.100 5838.662 - 5868.451: 0.0758% ( 2) 00:11:55.100 5898.240 - 5928.029: 0.0986% ( 3) 00:11:55.100 5928.029 - 5957.818: 0.1214% ( 3) 00:11:55.100 5957.818 - 5987.607: 0.1289% ( 1) 00:11:55.100 5987.607 - 6017.396: 0.1441% ( 2) 00:11:55.100 6047.185 - 6076.975: 0.1517% ( 1) 00:11:55.100 6076.975 - 6106.764: 0.1669% ( 2) 00:11:55.100 6106.764 - 6136.553: 0.1820% ( 2) 00:11:55.100 6136.553 - 6166.342: 0.1972% ( 2) 00:11:55.100 6166.342 - 6196.131: 0.2124% ( 2) 00:11:55.100 6196.131 - 6225.920: 0.2275% ( 2) 00:11:55.100 6225.920 - 6255.709: 0.2427% ( 2) 00:11:55.100 6255.709 - 6285.498: 0.2579% ( 2) 00:11:55.100 6285.498 - 6315.287: 0.2731% ( 2) 00:11:55.100 6315.287 - 6345.076: 0.2958% ( 3) 00:11:55.100 6345.076 - 6374.865: 0.3110% ( 2) 00:11:55.100 6374.865 - 6404.655: 0.3262% ( 2) 00:11:55.100 6404.655 - 6434.444: 0.3413% ( 2) 00:11:55.100 6434.444 - 6464.233: 0.3641% ( 3) 00:11:55.100 6464.233 - 6494.022: 0.3717% ( 1) 00:11:55.100 6494.022 - 6523.811: 0.3868% ( 2) 00:11:55.100 6523.811 - 6553.600: 0.4020% ( 2) 00:11:55.100 6553.600 - 6583.389: 0.4096% ( 1) 00:11:55.100 6583.389 - 6613.178: 0.4399% ( 4) 00:11:55.100 6613.178 - 6642.967: 0.4475% ( 1) 00:11:55.100 6642.967 - 6672.756: 0.4627% ( 2) 00:11:55.100 6672.756 - 6702.545: 0.4854% ( 3) 00:11:55.100 7864.320 - 7923.898: 0.5234% ( 5) 00:11:55.100 7923.898 - 7983.476: 0.5385% ( 2) 00:11:55.100 7983.476 - 8043.055: 0.5613% ( 3) 00:11:55.100 8043.055 - 8102.633: 0.6978% ( 18) 00:11:55.100 8102.633 - 8162.211: 0.9254% ( 30) 00:11:55.100 8162.211 - 8221.789: 1.2894% ( 48) 00:11:55.100 8221.789 - 8281.367: 1.7976% ( 67) 00:11:55.100 8281.367 - 8340.945: 2.6547% ( 113) 00:11:55.100 8340.945 - 8400.524: 3.7849% ( 149) 00:11:55.100 8400.524 - 8460.102: 5.2867% ( 198) 00:11:55.100 8460.102 - 8519.680: 6.9933% ( 225) 00:11:55.100 8519.680 - 8579.258: 9.0944% ( 277) 00:11:55.100 8579.258 - 8638.836: 11.5215% ( 320) 00:11:55.100 8638.836 - 8698.415: 14.1839% ( 351) 00:11:55.100 8698.415 - 8757.993: 17.0737% ( 381) 00:11:55.100 8757.993 - 8817.571: 20.1305% ( 403) 00:11:55.100 8817.571 - 8877.149: 23.2555% ( 412) 00:11:55.101 8877.149 - 8936.727: 26.5928% ( 440) 00:11:55.101 8936.727 - 8996.305: 30.0895% ( 461) 00:11:55.101 8996.305 - 9055.884: 33.6393% ( 468) 00:11:55.101 9055.884 - 9115.462: 37.2269% ( 473) 00:11:55.101 9115.462 - 9175.040: 40.9056% ( 485) 00:11:55.101 9175.040 - 9234.618: 44.5237% ( 477) 00:11:55.101 9234.618 - 9294.196: 48.3010% ( 498) 00:11:55.101 9294.196 - 9353.775: 51.9721% ( 484) 00:11:55.101 9353.775 - 9413.353: 55.9694% ( 527) 00:11:55.101 9413.353 - 9472.931: 59.7618% ( 500) 00:11:55.101 9472.931 - 9532.509: 63.3874% ( 478) 00:11:55.101 9532.509 - 9592.087: 67.0282% ( 480) 00:11:55.101 9592.087 - 9651.665: 70.3049% ( 432) 00:11:55.101 9651.665 - 9711.244: 73.2175% ( 384) 00:11:55.101 9711.244 - 9770.822: 75.8874% ( 352) 00:11:55.101 9770.822 - 9830.400: 78.3070% ( 319) 00:11:55.101 9830.400 - 9889.978: 80.3777% ( 273) 00:11:55.101 9889.978 - 9949.556: 82.2133% ( 242) 00:11:55.101 9949.556 - 10009.135: 83.9730% ( 232) 00:11:55.101 10009.135 - 10068.713: 85.6189% ( 217) 00:11:55.101 10068.713 - 10128.291: 87.0221% ( 185) 00:11:55.101 10128.291 - 10187.869: 88.2737% ( 165) 00:11:55.101 10187.869 - 10247.447: 89.4417% ( 154) 00:11:55.101 
10247.447 - 10307.025: 90.5947% ( 152) 00:11:55.101 10307.025 - 10366.604: 91.6945% ( 145) 00:11:55.101 10366.604 - 10426.182: 92.6350% ( 124) 00:11:55.101 10426.182 - 10485.760: 93.4314% ( 105) 00:11:55.101 10485.760 - 10545.338: 94.1748% ( 98) 00:11:55.101 10545.338 - 10604.916: 94.7588% ( 77) 00:11:55.101 10604.916 - 10664.495: 95.1001% ( 45) 00:11:55.101 10664.495 - 10724.073: 95.3201% ( 29) 00:11:55.101 10724.073 - 10783.651: 95.4945% ( 23) 00:11:55.101 10783.651 - 10843.229: 95.5400% ( 6) 00:11:55.101 10843.229 - 10902.807: 95.5704% ( 4) 00:11:55.101 10962.385 - 11021.964: 95.5931% ( 3) 00:11:55.101 11021.964 - 11081.542: 95.6083% ( 2) 00:11:55.101 11081.542 - 11141.120: 95.6311% ( 3) 00:11:55.101 11915.636 - 11975.215: 95.6387% ( 1) 00:11:55.101 11975.215 - 12034.793: 95.6538% ( 2) 00:11:55.101 12034.793 - 12094.371: 95.6690% ( 2) 00:11:55.101 12094.371 - 12153.949: 95.6842% ( 2) 00:11:55.101 12153.949 - 12213.527: 95.7069% ( 3) 00:11:55.101 12213.527 - 12273.105: 95.7297% ( 3) 00:11:55.101 12273.105 - 12332.684: 95.7448% ( 2) 00:11:55.101 12332.684 - 12392.262: 95.7600% ( 2) 00:11:55.101 12392.262 - 12451.840: 95.7752% ( 2) 00:11:55.101 12451.840 - 12511.418: 95.7828% ( 1) 00:11:55.101 12511.418 - 12570.996: 95.8055% ( 3) 00:11:55.101 12570.996 - 12630.575: 95.8131% ( 1) 00:11:55.101 12630.575 - 12690.153: 95.8283% ( 2) 00:11:55.101 12690.153 - 12749.731: 95.8510% ( 3) 00:11:55.101 12749.731 - 12809.309: 95.8586% ( 1) 00:11:55.101 12809.309 - 12868.887: 95.8890% ( 4) 00:11:55.101 12868.887 - 12928.465: 95.9193% ( 4) 00:11:55.101 12928.465 - 12988.044: 95.9421% ( 3) 00:11:55.101 12988.044 - 13047.622: 95.9648% ( 3) 00:11:55.101 13047.622 - 13107.200: 96.0027% ( 5) 00:11:55.101 13107.200 - 13166.778: 96.0558% ( 7) 00:11:55.101 13166.778 - 13226.356: 96.0862% ( 4) 00:11:55.101 13226.356 - 13285.935: 96.1165% ( 4) 00:11:55.101 13285.935 - 13345.513: 96.1468% ( 4) 00:11:55.101 13345.513 - 13405.091: 96.1696% ( 3) 00:11:55.101 13405.091 - 13464.669: 96.2227% ( 7) 00:11:55.101 13464.669 - 13524.247: 96.2454% ( 3) 00:11:55.101 13524.247 - 13583.825: 96.3061% ( 8) 00:11:55.101 13583.825 - 13643.404: 96.3516% ( 6) 00:11:55.101 13643.404 - 13702.982: 96.3896% ( 5) 00:11:55.101 13702.982 - 13762.560: 96.4351% ( 6) 00:11:55.101 13762.560 - 13822.138: 96.4882% ( 7) 00:11:55.101 13822.138 - 13881.716: 96.5185% ( 4) 00:11:55.101 13881.716 - 13941.295: 96.5488% ( 4) 00:11:55.101 13941.295 - 14000.873: 96.5868% ( 5) 00:11:55.101 14000.873 - 14060.451: 96.6323% ( 6) 00:11:55.101 14060.451 - 14120.029: 96.6854% ( 7) 00:11:55.101 14120.029 - 14179.607: 96.7688% ( 11) 00:11:55.101 14179.607 - 14239.185: 96.8295% ( 8) 00:11:55.101 14239.185 - 14298.764: 96.8902% ( 8) 00:11:55.101 14298.764 - 14358.342: 96.9736% ( 11) 00:11:55.101 14358.342 - 14417.920: 97.0570% ( 11) 00:11:55.101 14417.920 - 14477.498: 97.1329% ( 10) 00:11:55.101 14477.498 - 14537.076: 97.2087% ( 10) 00:11:55.101 14537.076 - 14596.655: 97.2542% ( 6) 00:11:55.101 14596.655 - 14656.233: 97.3377% ( 11) 00:11:55.101 14656.233 - 14715.811: 97.4059% ( 9) 00:11:55.101 14715.811 - 14775.389: 97.4894% ( 11) 00:11:55.101 14775.389 - 14834.967: 97.5728% ( 11) 00:11:55.101 14834.967 - 14894.545: 97.6259% ( 7) 00:11:55.101 14894.545 - 14954.124: 97.7093% ( 11) 00:11:55.101 14954.124 - 15013.702: 97.7397% ( 4) 00:11:55.101 15013.702 - 15073.280: 97.8231% ( 11) 00:11:55.101 15073.280 - 15132.858: 97.9141% ( 12) 00:11:55.101 15132.858 - 15192.436: 97.9900% ( 10) 00:11:55.101 15192.436 - 15252.015: 98.0507% ( 8) 00:11:55.101 15252.015 - 15371.171: 
98.2175% ( 22) 00:11:55.101 15371.171 - 15490.327: 98.3541% ( 18) 00:11:55.101 15490.327 - 15609.484: 98.4527% ( 13) 00:11:55.101 15609.484 - 15728.640: 98.5285% ( 10) 00:11:55.101 15728.640 - 15847.796: 98.5892% ( 8) 00:11:55.101 15847.796 - 15966.953: 98.6499% ( 8) 00:11:55.101 15966.953 - 16086.109: 98.7181% ( 9) 00:11:55.101 16086.109 - 16205.265: 98.7788% ( 8) 00:11:55.101 16205.265 - 16324.422: 98.8395% ( 8) 00:11:55.101 16324.422 - 16443.578: 98.8774% ( 5) 00:11:55.101 16443.578 - 16562.735: 98.9078% ( 4) 00:11:55.101 16562.735 - 16681.891: 98.9609% ( 7) 00:11:55.101 16681.891 - 16801.047: 99.0064% ( 6) 00:11:55.101 16801.047 - 16920.204: 99.0215% ( 2) 00:11:55.101 16920.204 - 17039.360: 99.0291% ( 1) 00:11:55.101 26691.025 - 26810.182: 99.0519% ( 3) 00:11:55.101 26810.182 - 26929.338: 99.0671% ( 2) 00:11:55.101 26929.338 - 27048.495: 99.0898% ( 3) 00:11:55.101 27048.495 - 27167.651: 99.1126% ( 3) 00:11:55.101 27167.651 - 27286.807: 99.1353% ( 3) 00:11:55.101 27286.807 - 27405.964: 99.1505% ( 2) 00:11:55.101 27405.964 - 27525.120: 99.1732% ( 3) 00:11:55.101 27525.120 - 27644.276: 99.1960% ( 3) 00:11:55.101 27644.276 - 27763.433: 99.2112% ( 2) 00:11:55.101 27763.433 - 27882.589: 99.2263% ( 2) 00:11:55.101 27882.589 - 28001.745: 99.2491% ( 3) 00:11:55.101 28001.745 - 28120.902: 99.2643% ( 2) 00:11:55.101 28120.902 - 28240.058: 99.2870% ( 3) 00:11:55.101 28240.058 - 28359.215: 99.3022% ( 2) 00:11:55.101 28359.215 - 28478.371: 99.3249% ( 3) 00:11:55.101 28478.371 - 28597.527: 99.3401% ( 2) 00:11:55.101 28597.527 - 28716.684: 99.3553% ( 2) 00:11:55.101 28716.684 - 28835.840: 99.3856% ( 4) 00:11:55.101 28835.840 - 28954.996: 99.4008% ( 2) 00:11:55.101 28954.996 - 29074.153: 99.4235% ( 3) 00:11:55.101 29074.153 - 29193.309: 99.4387% ( 2) 00:11:55.101 29193.309 - 29312.465: 99.4539% ( 2) 00:11:55.101 29312.465 - 29431.622: 99.4766% ( 3) 00:11:55.101 29431.622 - 29550.778: 99.4994% ( 3) 00:11:55.101 29550.778 - 29669.935: 99.5146% ( 2) 00:11:55.101 35746.909 - 35985.222: 99.5297% ( 2) 00:11:55.101 35985.222 - 36223.535: 99.5601% ( 4) 00:11:55.101 36223.535 - 36461.847: 99.5980% ( 5) 00:11:55.101 36461.847 - 36700.160: 99.6359% ( 5) 00:11:55.101 36700.160 - 36938.473: 99.6814% ( 6) 00:11:55.101 36938.473 - 37176.785: 99.7118% ( 4) 00:11:55.101 37176.785 - 37415.098: 99.7497% ( 5) 00:11:55.101 37415.098 - 37653.411: 99.7952% ( 6) 00:11:55.101 37653.411 - 37891.724: 99.8331% ( 5) 00:11:55.101 37891.724 - 38130.036: 99.8711% ( 5) 00:11:55.101 38130.036 - 38368.349: 99.9014% ( 4) 00:11:55.101 38368.349 - 38606.662: 99.9469% ( 6) 00:11:55.101 38606.662 - 38844.975: 99.9848% ( 5) 00:11:55.101 38844.975 - 39083.287: 100.0000% ( 2) 00:11:55.101 00:11:55.101 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:55.101 ============================================================================== 00:11:55.101 Range in us Cumulative IO count 00:11:55.101 5481.193 - 5510.982: 0.0228% ( 3) 00:11:55.101 5510.982 - 5540.771: 0.0455% ( 3) 00:11:55.101 5540.771 - 5570.560: 0.0607% ( 2) 00:11:55.101 5570.560 - 5600.349: 0.0758% ( 2) 00:11:55.101 5600.349 - 5630.138: 0.0986% ( 3) 00:11:55.101 5630.138 - 5659.927: 0.1214% ( 3) 00:11:55.101 5659.927 - 5689.716: 0.1365% ( 2) 00:11:55.101 5689.716 - 5719.505: 0.1593% ( 3) 00:11:55.101 5719.505 - 5749.295: 0.1745% ( 2) 00:11:55.101 5749.295 - 5779.084: 0.1972% ( 3) 00:11:55.101 5779.084 - 5808.873: 0.2200% ( 3) 00:11:55.101 5808.873 - 5838.662: 0.2351% ( 2) 00:11:55.101 5838.662 - 5868.451: 0.2579% ( 3) 00:11:55.101 5868.451 - 5898.240: 0.2731% ( 2) 
00:11:55.101 5898.240 - 5928.029: 0.2958% ( 3) 00:11:55.101 5928.029 - 5957.818: 0.3186% ( 3) 00:11:55.101 5957.818 - 5987.607: 0.3413% ( 3) 00:11:55.101 5987.607 - 6017.396: 0.3565% ( 2) 00:11:55.101 6017.396 - 6047.185: 0.3792% ( 3) 00:11:55.101 6047.185 - 6076.975: 0.3944% ( 2) 00:11:55.101 6076.975 - 6106.764: 0.4096% ( 2) 00:11:55.101 6136.553 - 6166.342: 0.4248% ( 2) 00:11:55.101 6166.342 - 6196.131: 0.4399% ( 2) 00:11:55.101 6196.131 - 6225.920: 0.4627% ( 3) 00:11:55.101 6225.920 - 6255.709: 0.4779% ( 2) 00:11:55.101 6255.709 - 6285.498: 0.4854% ( 1) 00:11:55.101 7923.898 - 7983.476: 0.4930% ( 1) 00:11:55.101 7983.476 - 8043.055: 0.5234% ( 4) 00:11:55.101 8043.055 - 8102.633: 0.5461% ( 3) 00:11:55.101 8102.633 - 8162.211: 0.6296% ( 11) 00:11:55.101 8162.211 - 8221.789: 0.7661% ( 18) 00:11:55.101 8221.789 - 8281.367: 1.0771% ( 41) 00:11:55.101 8281.367 - 8340.945: 1.6156% ( 71) 00:11:55.101 8340.945 - 8400.524: 2.3817% ( 101) 00:11:55.101 8400.524 - 8460.102: 3.7166% ( 176) 00:11:55.101 8460.102 - 8519.680: 5.2033% ( 196) 00:11:55.101 8519.680 - 8579.258: 7.0692% ( 246) 00:11:55.101 8579.258 - 8638.836: 9.1095% ( 269) 00:11:55.102 8638.836 - 8698.415: 11.7112% ( 343) 00:11:55.102 8698.415 - 8757.993: 14.7148% ( 396) 00:11:55.102 8757.993 - 8817.571: 18.0143% ( 435) 00:11:55.102 8817.571 - 8877.149: 21.6171% ( 475) 00:11:55.102 8877.149 - 8936.727: 25.3186% ( 488) 00:11:55.102 8936.727 - 8996.305: 29.2703% ( 521) 00:11:55.102 8996.305 - 9055.884: 33.3965% ( 544) 00:11:55.102 9055.884 - 9115.462: 37.5986% ( 554) 00:11:55.102 9115.462 - 9175.040: 41.7248% ( 544) 00:11:55.102 9175.040 - 9234.618: 45.8510% ( 544) 00:11:55.102 9234.618 - 9294.196: 49.8483% ( 527) 00:11:55.102 9294.196 - 9353.775: 53.7470% ( 514) 00:11:55.102 9353.775 - 9413.353: 57.4333% ( 486) 00:11:55.102 9413.353 - 9472.931: 60.9678% ( 466) 00:11:55.102 9472.931 - 9532.509: 64.5479% ( 472) 00:11:55.102 9532.509 - 9592.087: 67.9763% ( 452) 00:11:55.102 9592.087 - 9651.665: 71.0862% ( 410) 00:11:55.102 9651.665 - 9711.244: 73.9912% ( 383) 00:11:55.102 9711.244 - 9770.822: 76.5777% ( 341) 00:11:55.102 9770.822 - 9830.400: 78.9062% ( 307) 00:11:55.102 9830.400 - 9889.978: 81.0528% ( 283) 00:11:55.102 9889.978 - 9949.556: 82.9490% ( 250) 00:11:55.102 9949.556 - 10009.135: 84.6632% ( 226) 00:11:55.102 10009.135 - 10068.713: 86.2637% ( 211) 00:11:55.102 10068.713 - 10128.291: 87.7655% ( 198) 00:11:55.102 10128.291 - 10187.869: 89.1914% ( 188) 00:11:55.102 10187.869 - 10247.447: 90.5112% ( 174) 00:11:55.102 10247.447 - 10307.025: 91.6869% ( 155) 00:11:55.102 10307.025 - 10366.604: 92.7109% ( 135) 00:11:55.102 10366.604 - 10426.182: 93.5831% ( 115) 00:11:55.102 10426.182 - 10485.760: 94.2127% ( 83) 00:11:55.102 10485.760 - 10545.338: 94.6981% ( 64) 00:11:55.102 10545.338 - 10604.916: 95.0622% ( 48) 00:11:55.102 10604.916 - 10664.495: 95.3049% ( 32) 00:11:55.102 10664.495 - 10724.073: 95.4566% ( 20) 00:11:55.102 10724.073 - 10783.651: 95.5325% ( 10) 00:11:55.102 10783.651 - 10843.229: 95.5628% ( 4) 00:11:55.102 10843.229 - 10902.807: 95.5856% ( 3) 00:11:55.102 10902.807 - 10962.385: 95.6159% ( 4) 00:11:55.102 10962.385 - 11021.964: 95.6311% ( 2) 00:11:55.102 11558.167 - 11617.745: 95.6538% ( 3) 00:11:55.102 11617.745 - 11677.324: 95.6690% ( 2) 00:11:55.102 11677.324 - 11736.902: 95.6842% ( 2) 00:11:55.102 11736.902 - 11796.480: 95.6993% ( 2) 00:11:55.102 11796.480 - 11856.058: 95.7221% ( 3) 00:11:55.102 11856.058 - 11915.636: 95.7448% ( 3) 00:11:55.102 11915.636 - 11975.215: 95.7676% ( 3) 00:11:55.102 11975.215 - 12034.793: 
95.7828% ( 2) 00:11:55.102 12034.793 - 12094.371: 95.8055% ( 3) 00:11:55.102 12094.371 - 12153.949: 95.8207% ( 2) 00:11:55.102 12153.949 - 12213.527: 95.8359% ( 2) 00:11:55.102 12213.527 - 12273.105: 95.8586% ( 3) 00:11:55.102 12273.105 - 12332.684: 95.8662% ( 1) 00:11:55.102 12332.684 - 12392.262: 95.8890% ( 3) 00:11:55.102 12392.262 - 12451.840: 95.9041% ( 2) 00:11:55.102 12451.840 - 12511.418: 95.9269% ( 3) 00:11:55.102 12511.418 - 12570.996: 95.9421% ( 2) 00:11:55.102 12570.996 - 12630.575: 95.9572% ( 2) 00:11:55.102 12630.575 - 12690.153: 95.9800% ( 3) 00:11:55.102 12690.153 - 12749.731: 95.9951% ( 2) 00:11:55.102 12749.731 - 12809.309: 96.0179% ( 3) 00:11:55.102 12809.309 - 12868.887: 96.0331% ( 2) 00:11:55.102 12868.887 - 12928.465: 96.0482% ( 2) 00:11:55.102 12928.465 - 12988.044: 96.0862% ( 5) 00:11:55.102 12988.044 - 13047.622: 96.1393% ( 7) 00:11:55.102 13047.622 - 13107.200: 96.1848% ( 6) 00:11:55.102 13107.200 - 13166.778: 96.2227% ( 5) 00:11:55.102 13166.778 - 13226.356: 96.2454% ( 3) 00:11:55.102 13226.356 - 13285.935: 96.2758% ( 4) 00:11:55.102 13285.935 - 13345.513: 96.3061% ( 4) 00:11:55.102 13345.513 - 13405.091: 96.3441% ( 5) 00:11:55.102 13405.091 - 13464.669: 96.3592% ( 2) 00:11:55.102 13464.669 - 13524.247: 96.3896% ( 4) 00:11:55.102 13524.247 - 13583.825: 96.4275% ( 5) 00:11:55.102 13583.825 - 13643.404: 96.4806% ( 7) 00:11:55.102 13643.404 - 13702.982: 96.5033% ( 3) 00:11:55.102 13702.982 - 13762.560: 96.5564% ( 7) 00:11:55.102 13762.560 - 13822.138: 96.5944% ( 5) 00:11:55.102 13822.138 - 13881.716: 96.6323% ( 5) 00:11:55.102 13881.716 - 13941.295: 96.6778% ( 6) 00:11:55.102 13941.295 - 14000.873: 96.7081% ( 4) 00:11:55.102 14000.873 - 14060.451: 96.7612% ( 7) 00:11:55.102 14060.451 - 14120.029: 96.8219% ( 8) 00:11:55.102 14120.029 - 14179.607: 96.8598% ( 5) 00:11:55.102 14179.607 - 14239.185: 96.9205% ( 8) 00:11:55.102 14239.185 - 14298.764: 96.9812% ( 8) 00:11:55.102 14298.764 - 14358.342: 97.0495% ( 9) 00:11:55.102 14358.342 - 14417.920: 97.1329% ( 11) 00:11:55.102 14417.920 - 14477.498: 97.2012% ( 9) 00:11:55.102 14477.498 - 14537.076: 97.2846% ( 11) 00:11:55.102 14537.076 - 14596.655: 97.3832% ( 13) 00:11:55.102 14596.655 - 14656.233: 97.4590% ( 10) 00:11:55.102 14656.233 - 14715.811: 97.5349% ( 10) 00:11:55.102 14715.811 - 14775.389: 97.6107% ( 10) 00:11:55.102 14775.389 - 14834.967: 97.7093% ( 13) 00:11:55.102 14834.967 - 14894.545: 97.8004% ( 12) 00:11:55.102 14894.545 - 14954.124: 97.8686% ( 9) 00:11:55.102 14954.124 - 15013.702: 97.9521% ( 11) 00:11:55.102 15013.702 - 15073.280: 98.0583% ( 14) 00:11:55.102 15073.280 - 15132.858: 98.1341% ( 10) 00:11:55.102 15132.858 - 15192.436: 98.2251% ( 12) 00:11:55.102 15192.436 - 15252.015: 98.2630% ( 5) 00:11:55.102 15252.015 - 15371.171: 98.3541% ( 12) 00:11:55.102 15371.171 - 15490.327: 98.4299% ( 10) 00:11:55.102 15490.327 - 15609.484: 98.5058% ( 10) 00:11:55.102 15609.484 - 15728.640: 98.5740% ( 9) 00:11:55.102 15728.640 - 15847.796: 98.6650% ( 12) 00:11:55.102 15847.796 - 15966.953: 98.7485% ( 11) 00:11:55.102 15966.953 - 16086.109: 98.8167% ( 9) 00:11:55.102 16086.109 - 16205.265: 98.8774% ( 8) 00:11:55.102 16205.265 - 16324.422: 98.9305% ( 7) 00:11:55.102 16324.422 - 16443.578: 98.9760% ( 6) 00:11:55.102 16443.578 - 16562.735: 98.9988% ( 3) 00:11:55.102 16562.735 - 16681.891: 99.0291% ( 4) 00:11:55.102 25141.993 - 25261.149: 99.0519% ( 3) 00:11:55.102 25261.149 - 25380.305: 99.0671% ( 2) 00:11:55.102 25380.305 - 25499.462: 99.0898% ( 3) 00:11:55.102 25499.462 - 25618.618: 99.1126% ( 3) 00:11:55.102 25618.618 
- 25737.775: 99.1353% ( 3) 00:11:55.102 25737.775 - 25856.931: 99.1505% ( 2) 00:11:55.102 25856.931 - 25976.087: 99.1732% ( 3) 00:11:55.102 25976.087 - 26095.244: 99.1960% ( 3) 00:11:55.102 26095.244 - 26214.400: 99.2188% ( 3) 00:11:55.102 26214.400 - 26333.556: 99.2415% ( 3) 00:11:55.102 26333.556 - 26452.713: 99.2567% ( 2) 00:11:55.102 26452.713 - 26571.869: 99.2794% ( 3) 00:11:55.102 26571.869 - 26691.025: 99.3022% ( 3) 00:11:55.102 26691.025 - 26810.182: 99.3249% ( 3) 00:11:55.102 26810.182 - 26929.338: 99.3477% ( 3) 00:11:55.102 26929.338 - 27048.495: 99.3704% ( 3) 00:11:55.102 27048.495 - 27167.651: 99.3932% ( 3) 00:11:55.102 27167.651 - 27286.807: 99.4160% ( 3) 00:11:55.102 27286.807 - 27405.964: 99.4311% ( 2) 00:11:55.102 27405.964 - 27525.120: 99.4539% ( 3) 00:11:55.102 27525.120 - 27644.276: 99.4766% ( 3) 00:11:55.102 27644.276 - 27763.433: 99.4994% ( 3) 00:11:55.102 27763.433 - 27882.589: 99.5146% ( 2) 00:11:55.102 34317.033 - 34555.345: 99.5449% ( 4) 00:11:55.102 34555.345 - 34793.658: 99.5828% ( 5) 00:11:55.102 34793.658 - 35031.971: 99.6283% ( 6) 00:11:55.102 35031.971 - 35270.284: 99.6738% ( 6) 00:11:55.102 35270.284 - 35508.596: 99.7118% ( 5) 00:11:55.102 35508.596 - 35746.909: 99.7573% ( 6) 00:11:55.102 35746.909 - 35985.222: 99.8028% ( 6) 00:11:55.102 35985.222 - 36223.535: 99.8483% ( 6) 00:11:55.102 36223.535 - 36461.847: 99.8938% ( 6) 00:11:55.102 36461.847 - 36700.160: 99.9393% ( 6) 00:11:55.102 36700.160 - 36938.473: 99.9848% ( 6) 00:11:55.102 36938.473 - 37176.785: 100.0000% ( 2) 00:11:55.102 00:11:55.102 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:55.102 ============================================================================== 00:11:55.102 Range in us Cumulative IO count 00:11:55.102 4438.575 - 4468.364: 0.0152% ( 2) 00:11:55.102 4468.364 - 4498.153: 0.0303% ( 2) 00:11:55.102 4498.153 - 4527.942: 0.0531% ( 3) 00:11:55.102 4527.942 - 4557.731: 0.0758% ( 3) 00:11:55.102 4557.731 - 4587.520: 0.0910% ( 2) 00:11:55.102 4587.520 - 4617.309: 0.1138% ( 3) 00:11:55.102 4617.309 - 4647.098: 0.1289% ( 2) 00:11:55.102 4647.098 - 4676.887: 0.1517% ( 3) 00:11:55.102 4676.887 - 4706.676: 0.1669% ( 2) 00:11:55.102 4706.676 - 4736.465: 0.1896% ( 3) 00:11:55.102 4736.465 - 4766.255: 0.2048% ( 2) 00:11:55.102 4766.255 - 4796.044: 0.2200% ( 2) 00:11:55.102 4796.044 - 4825.833: 0.2351% ( 2) 00:11:55.102 4825.833 - 4855.622: 0.2503% ( 2) 00:11:55.102 4855.622 - 4885.411: 0.2655% ( 2) 00:11:55.102 4885.411 - 4915.200: 0.2882% ( 3) 00:11:55.102 4915.200 - 4944.989: 0.3034% ( 2) 00:11:55.102 4944.989 - 4974.778: 0.3186% ( 2) 00:11:55.102 4974.778 - 5004.567: 0.3337% ( 2) 00:11:55.102 5004.567 - 5034.356: 0.3565% ( 3) 00:11:55.102 5034.356 - 5064.145: 0.3717% ( 2) 00:11:55.102 5064.145 - 5093.935: 0.3868% ( 2) 00:11:55.102 5093.935 - 5123.724: 0.4020% ( 2) 00:11:55.102 5123.724 - 5153.513: 0.4172% ( 2) 00:11:55.102 5153.513 - 5183.302: 0.4248% ( 1) 00:11:55.102 5183.302 - 5213.091: 0.4475% ( 3) 00:11:55.102 5213.091 - 5242.880: 0.4627% ( 2) 00:11:55.102 5242.880 - 5272.669: 0.4703% ( 1) 00:11:55.102 5272.669 - 5302.458: 0.4779% ( 1) 00:11:55.103 5302.458 - 5332.247: 0.4854% ( 1) 00:11:55.103 7536.640 - 7566.429: 0.4930% ( 1) 00:11:55.103 7566.429 - 7596.218: 0.5082% ( 2) 00:11:55.103 7596.218 - 7626.007: 0.5234% ( 2) 00:11:55.103 7626.007 - 7685.585: 0.5613% ( 5) 00:11:55.103 7685.585 - 7745.164: 0.6144% ( 7) 00:11:55.103 7745.164 - 7804.742: 0.6447% ( 4) 00:11:55.103 7804.742 - 7864.320: 0.6751% ( 4) 00:11:55.103 7864.320 - 7923.898: 0.7206% ( 6) 00:11:55.103 
7923.898 - 7983.476: 0.7737% ( 7) 00:11:55.103 7983.476 - 8043.055: 0.8495% ( 10) 00:11:55.103 8043.055 - 8102.633: 0.9633% ( 15) 00:11:55.103 8102.633 - 8162.211: 1.1453% ( 24) 00:11:55.103 8162.211 - 8221.789: 1.3729% ( 30) 00:11:55.103 8221.789 - 8281.367: 1.8204% ( 59) 00:11:55.103 8281.367 - 8340.945: 2.4575% ( 84) 00:11:55.103 8340.945 - 8400.524: 3.3298% ( 115) 00:11:55.103 8400.524 - 8460.102: 4.4600% ( 149) 00:11:55.103 8460.102 - 8519.680: 5.9921% ( 202) 00:11:55.103 8519.680 - 8579.258: 8.0173% ( 267) 00:11:55.103 8579.258 - 8638.836: 10.2169% ( 290) 00:11:55.103 8638.836 - 8698.415: 12.5607% ( 309) 00:11:55.103 8698.415 - 8757.993: 15.3368% ( 366) 00:11:55.103 8757.993 - 8817.571: 18.4466% ( 410) 00:11:55.103 8817.571 - 8877.149: 21.9736% ( 465) 00:11:55.103 8877.149 - 8936.727: 25.6978% ( 491) 00:11:55.103 8936.727 - 8996.305: 29.5434% ( 507) 00:11:55.103 8996.305 - 9055.884: 33.6241% ( 538) 00:11:55.103 9055.884 - 9115.462: 37.5834% ( 522) 00:11:55.103 9115.462 - 9175.040: 41.5655% ( 525) 00:11:55.103 9175.040 - 9234.618: 45.5780% ( 529) 00:11:55.103 9234.618 - 9294.196: 49.4691% ( 513) 00:11:55.103 9294.196 - 9353.775: 53.3222% ( 508) 00:11:55.103 9353.775 - 9413.353: 57.1147% ( 500) 00:11:55.103 9413.353 - 9472.931: 60.8389% ( 491) 00:11:55.103 9472.931 - 9532.509: 64.4797% ( 480) 00:11:55.103 9532.509 - 9592.087: 67.9688% ( 460) 00:11:55.103 9592.087 - 9651.665: 71.1772% ( 423) 00:11:55.103 9651.665 - 9711.244: 74.0064% ( 373) 00:11:55.103 9711.244 - 9770.822: 76.6914% ( 354) 00:11:55.103 9770.822 - 9830.400: 79.0428% ( 310) 00:11:55.103 9830.400 - 9889.978: 81.1893% ( 283) 00:11:55.103 9889.978 - 9949.556: 83.0476% ( 245) 00:11:55.103 9949.556 - 10009.135: 84.7467% ( 224) 00:11:55.103 10009.135 - 10068.713: 86.1726% ( 188) 00:11:55.103 10068.713 - 10128.291: 87.4772% ( 172) 00:11:55.103 10128.291 - 10187.869: 88.7970% ( 174) 00:11:55.103 10187.869 - 10247.447: 90.0941% ( 171) 00:11:55.103 10247.447 - 10307.025: 91.2470% ( 152) 00:11:55.103 10307.025 - 10366.604: 92.3240% ( 142) 00:11:55.103 10366.604 - 10426.182: 93.2570% ( 123) 00:11:55.103 10426.182 - 10485.760: 93.9851% ( 96) 00:11:55.103 10485.760 - 10545.338: 94.5161% ( 70) 00:11:55.103 10545.338 - 10604.916: 94.9029% ( 51) 00:11:55.103 10604.916 - 10664.495: 95.2063% ( 40) 00:11:55.103 10664.495 - 10724.073: 95.4187% ( 28) 00:11:55.103 10724.073 - 10783.651: 95.5325% ( 15) 00:11:55.103 10783.651 - 10843.229: 95.6235% ( 12) 00:11:55.103 10843.229 - 10902.807: 95.6842% ( 8) 00:11:55.103 10902.807 - 10962.385: 95.7221% ( 5) 00:11:55.103 10962.385 - 11021.964: 95.7448% ( 3) 00:11:55.103 11021.964 - 11081.542: 95.7676% ( 3) 00:11:55.103 11081.542 - 11141.120: 95.7828% ( 2) 00:11:55.103 11141.120 - 11200.698: 95.7979% ( 2) 00:11:55.103 11200.698 - 11260.276: 95.8131% ( 2) 00:11:55.103 11260.276 - 11319.855: 95.8283% ( 2) 00:11:55.103 11319.855 - 11379.433: 95.8434% ( 2) 00:11:55.103 11379.433 - 11439.011: 95.8586% ( 2) 00:11:55.103 11439.011 - 11498.589: 95.8738% ( 2) 00:11:55.103 11498.589 - 11558.167: 95.8890% ( 2) 00:11:55.103 11558.167 - 11617.745: 95.9041% ( 2) 00:11:55.103 11617.745 - 11677.324: 95.9193% ( 2) 00:11:55.103 11677.324 - 11736.902: 95.9345% ( 2) 00:11:55.103 11736.902 - 11796.480: 95.9572% ( 3) 00:11:55.103 11796.480 - 11856.058: 95.9724% ( 2) 00:11:55.103 11856.058 - 11915.636: 95.9876% ( 2) 00:11:55.103 11915.636 - 11975.215: 96.0027% ( 2) 00:11:55.103 11975.215 - 12034.793: 96.0255% ( 3) 00:11:55.103 12034.793 - 12094.371: 96.0407% ( 2) 00:11:55.103 12094.371 - 12153.949: 96.0558% ( 2) 00:11:55.103 
12153.949 - 12213.527: 96.0710% ( 2) 00:11:55.103 12213.527 - 12273.105: 96.0938% ( 3) 00:11:55.103 12273.105 - 12332.684: 96.1089% ( 2) 00:11:55.103 12332.684 - 12392.262: 96.1165% ( 1) 00:11:55.103 12988.044 - 13047.622: 96.1241% ( 1) 00:11:55.103 13047.622 - 13107.200: 96.1317% ( 1) 00:11:55.103 13107.200 - 13166.778: 96.1468% ( 2) 00:11:55.103 13166.778 - 13226.356: 96.1620% ( 2) 00:11:55.103 13226.356 - 13285.935: 96.1772% ( 2) 00:11:55.103 13285.935 - 13345.513: 96.1848% ( 1) 00:11:55.103 13345.513 - 13405.091: 96.1999% ( 2) 00:11:55.103 13405.091 - 13464.669: 96.2151% ( 2) 00:11:55.103 13464.669 - 13524.247: 96.2303% ( 2) 00:11:55.103 13524.247 - 13583.825: 96.2530% ( 3) 00:11:55.103 13583.825 - 13643.404: 96.2834% ( 4) 00:11:55.103 13643.404 - 13702.982: 96.3137% ( 4) 00:11:55.103 13702.982 - 13762.560: 96.3516% ( 5) 00:11:55.103 13762.560 - 13822.138: 96.4047% ( 7) 00:11:55.103 13822.138 - 13881.716: 96.4882% ( 11) 00:11:55.103 13881.716 - 13941.295: 96.5261% ( 5) 00:11:55.103 13941.295 - 14000.873: 96.5944% ( 9) 00:11:55.103 14000.873 - 14060.451: 96.6475% ( 7) 00:11:55.103 14060.451 - 14120.029: 96.7081% ( 8) 00:11:55.103 14120.029 - 14179.607: 96.7688% ( 8) 00:11:55.103 14179.607 - 14239.185: 96.8219% ( 7) 00:11:55.103 14239.185 - 14298.764: 96.8750% ( 7) 00:11:55.103 14298.764 - 14358.342: 96.9433% ( 9) 00:11:55.103 14358.342 - 14417.920: 97.0115% ( 9) 00:11:55.103 14417.920 - 14477.498: 97.0798% ( 9) 00:11:55.103 14477.498 - 14537.076: 97.1708% ( 12) 00:11:55.103 14537.076 - 14596.655: 97.2542% ( 11) 00:11:55.103 14596.655 - 14656.233: 97.3453% ( 12) 00:11:55.103 14656.233 - 14715.811: 97.4439% ( 13) 00:11:55.103 14715.811 - 14775.389: 97.5197% ( 10) 00:11:55.103 14775.389 - 14834.967: 97.5956% ( 10) 00:11:55.103 14834.967 - 14894.545: 97.6942% ( 13) 00:11:55.103 14894.545 - 14954.124: 97.7700% ( 10) 00:11:55.103 14954.124 - 15013.702: 97.8610% ( 12) 00:11:55.103 15013.702 - 15073.280: 97.9217% ( 8) 00:11:55.103 15073.280 - 15132.858: 97.9824% ( 8) 00:11:55.103 15132.858 - 15192.436: 98.0431% ( 8) 00:11:55.103 15192.436 - 15252.015: 98.1038% ( 8) 00:11:55.103 15252.015 - 15371.171: 98.1948% ( 12) 00:11:55.103 15371.171 - 15490.327: 98.2934% ( 13) 00:11:55.103 15490.327 - 15609.484: 98.3996% ( 14) 00:11:55.103 15609.484 - 15728.640: 98.5058% ( 14) 00:11:55.103 15728.640 - 15847.796: 98.6044% ( 13) 00:11:55.103 15847.796 - 15966.953: 98.7030% ( 13) 00:11:55.103 15966.953 - 16086.109: 98.7940% ( 12) 00:11:55.103 16086.109 - 16205.265: 98.8471% ( 7) 00:11:55.103 16205.265 - 16324.422: 98.9002% ( 7) 00:11:55.103 16324.422 - 16443.578: 98.9457% ( 6) 00:11:55.103 16443.578 - 16562.735: 98.9912% ( 6) 00:11:55.103 16562.735 - 16681.891: 99.0215% ( 4) 00:11:55.103 16681.891 - 16801.047: 99.0291% ( 1) 00:11:55.103 23354.647 - 23473.804: 99.0519% ( 3) 00:11:55.103 23473.804 - 23592.960: 99.0746% ( 3) 00:11:55.103 23592.960 - 23712.116: 99.0898% ( 2) 00:11:55.103 23712.116 - 23831.273: 99.1126% ( 3) 00:11:55.103 23831.273 - 23950.429: 99.1353% ( 3) 00:11:55.103 23950.429 - 24069.585: 99.1581% ( 3) 00:11:55.103 24069.585 - 24188.742: 99.1808% ( 3) 00:11:55.103 24188.742 - 24307.898: 99.2036% ( 3) 00:11:55.103 24307.898 - 24427.055: 99.2263% ( 3) 00:11:55.103 24427.055 - 24546.211: 99.2491% ( 3) 00:11:55.103 24546.211 - 24665.367: 99.2643% ( 2) 00:11:55.103 24665.367 - 24784.524: 99.2870% ( 3) 00:11:55.103 24784.524 - 24903.680: 99.3098% ( 3) 00:11:55.103 24903.680 - 25022.836: 99.3325% ( 3) 00:11:55.103 25022.836 - 25141.993: 99.3553% ( 3) 00:11:55.103 25141.993 - 25261.149: 99.3704% ( 2) 
00:11:55.103 25261.149 - 25380.305: 99.3932% ( 3) 00:11:55.103 25380.305 - 25499.462: 99.4160% ( 3) 00:11:55.103 25499.462 - 25618.618: 99.4387% ( 3) 00:11:55.103 25618.618 - 25737.775: 99.4615% ( 3) 00:11:55.103 25737.775 - 25856.931: 99.4842% ( 3) 00:11:55.103 25856.931 - 25976.087: 99.5070% ( 3) 00:11:55.103 25976.087 - 26095.244: 99.5146% ( 1) 00:11:55.103 32410.531 - 32648.844: 99.5221% ( 1) 00:11:55.103 32648.844 - 32887.156: 99.5677% ( 6) 00:11:55.103 32887.156 - 33125.469: 99.6132% ( 6) 00:11:55.103 33125.469 - 33363.782: 99.6511% ( 5) 00:11:55.103 33363.782 - 33602.095: 99.6966% ( 6) 00:11:55.103 33602.095 - 33840.407: 99.7421% ( 6) 00:11:55.103 33840.407 - 34078.720: 99.7800% ( 5) 00:11:55.103 34078.720 - 34317.033: 99.8255% ( 6) 00:11:55.103 34317.033 - 34555.345: 99.8711% ( 6) 00:11:55.103 34555.345 - 34793.658: 99.9090% ( 5) 00:11:55.103 34793.658 - 35031.971: 99.9469% ( 5) 00:11:55.103 35031.971 - 35270.284: 99.9848% ( 5) 00:11:55.103 35270.284 - 35508.596: 100.0000% ( 2) 00:11:55.103 00:11:55.103 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:55.104 ============================================================================== 00:11:55.104 Range in us Cumulative IO count 00:11:55.104 4170.473 - 4200.262: 0.0379% ( 5) 00:11:55.104 4230.051 - 4259.840: 0.0531% ( 2) 00:11:55.104 4259.840 - 4289.629: 0.0683% ( 2) 00:11:55.104 4289.629 - 4319.418: 0.0834% ( 2) 00:11:55.104 4319.418 - 4349.207: 0.1062% ( 3) 00:11:55.104 4349.207 - 4378.996: 0.1289% ( 3) 00:11:55.104 4378.996 - 4408.785: 0.1517% ( 3) 00:11:55.104 4408.785 - 4438.575: 0.1669% ( 2) 00:11:55.104 4438.575 - 4468.364: 0.1896% ( 3) 00:11:55.104 4468.364 - 4498.153: 0.2124% ( 3) 00:11:55.104 4498.153 - 4527.942: 0.2275% ( 2) 00:11:55.104 4527.942 - 4557.731: 0.2503% ( 3) 00:11:55.104 4557.731 - 4587.520: 0.2655% ( 2) 00:11:55.104 4587.520 - 4617.309: 0.2806% ( 2) 00:11:55.104 4617.309 - 4647.098: 0.3034% ( 3) 00:11:55.104 4647.098 - 4676.887: 0.3186% ( 2) 00:11:55.104 4676.887 - 4706.676: 0.3413% ( 3) 00:11:55.104 4706.676 - 4736.465: 0.3641% ( 3) 00:11:55.104 4736.465 - 4766.255: 0.3792% ( 2) 00:11:55.104 4766.255 - 4796.044: 0.4020% ( 3) 00:11:55.104 4796.044 - 4825.833: 0.4172% ( 2) 00:11:55.104 4825.833 - 4855.622: 0.4399% ( 3) 00:11:55.104 4855.622 - 4885.411: 0.4551% ( 2) 00:11:55.104 4885.411 - 4915.200: 0.4779% ( 3) 00:11:55.104 4915.200 - 4944.989: 0.4854% ( 1) 00:11:55.104 7149.382 - 7179.171: 0.5158% ( 4) 00:11:55.104 7179.171 - 7208.960: 0.5309% ( 2) 00:11:55.104 7208.960 - 7238.749: 0.5537% ( 3) 00:11:55.104 7268.538 - 7298.327: 0.5765% ( 3) 00:11:55.104 7298.327 - 7328.116: 0.5992% ( 3) 00:11:55.104 7328.116 - 7357.905: 0.6144% ( 2) 00:11:55.104 7357.905 - 7387.695: 0.6220% ( 1) 00:11:55.104 7387.695 - 7417.484: 0.6447% ( 3) 00:11:55.104 7417.484 - 7447.273: 0.6599% ( 2) 00:11:55.104 7447.273 - 7477.062: 0.6826% ( 3) 00:11:55.104 7477.062 - 7506.851: 0.6978% ( 2) 00:11:55.104 7506.851 - 7536.640: 0.7206% ( 3) 00:11:55.104 7536.640 - 7566.429: 0.7357% ( 2) 00:11:55.104 7566.429 - 7596.218: 0.7585% ( 3) 00:11:55.104 7596.218 - 7626.007: 0.7737% ( 2) 00:11:55.104 7626.007 - 7685.585: 0.8192% ( 6) 00:11:55.104 7685.585 - 7745.164: 0.8571% ( 5) 00:11:55.104 7745.164 - 7804.742: 0.8950% ( 5) 00:11:55.104 7804.742 - 7864.320: 0.9329% ( 5) 00:11:55.104 7864.320 - 7923.898: 0.9633% ( 4) 00:11:55.104 7923.898 - 7983.476: 0.9709% ( 1) 00:11:55.104 7983.476 - 8043.055: 0.9936% ( 3) 00:11:55.104 8043.055 - 8102.633: 1.0695% ( 10) 00:11:55.104 8102.633 - 8162.211: 1.2288% ( 21) 00:11:55.104 8162.211 - 
8221.789: 1.4108% ( 24) 00:11:55.104 8221.789 - 8281.367: 1.7673% ( 47) 00:11:55.104 8281.367 - 8340.945: 2.3362% ( 75) 00:11:55.104 8340.945 - 8400.524: 3.2084% ( 115) 00:11:55.104 8400.524 - 8460.102: 4.3917% ( 156) 00:11:55.104 8460.102 - 8519.680: 5.9011% ( 199) 00:11:55.104 8519.680 - 8579.258: 7.8580% ( 258) 00:11:55.104 8579.258 - 8638.836: 10.1562% ( 303) 00:11:55.104 8638.836 - 8698.415: 12.6820% ( 333) 00:11:55.104 8698.415 - 8757.993: 15.5416% ( 377) 00:11:55.104 8757.993 - 8817.571: 18.6514% ( 410) 00:11:55.104 8817.571 - 8877.149: 22.0191% ( 444) 00:11:55.104 8877.149 - 8936.727: 25.6978% ( 485) 00:11:55.104 8936.727 - 8996.305: 29.5055% ( 502) 00:11:55.104 8996.305 - 9055.884: 33.3434% ( 506) 00:11:55.104 9055.884 - 9115.462: 37.3711% ( 531) 00:11:55.104 9115.462 - 9175.040: 41.3001% ( 518) 00:11:55.104 9175.040 - 9234.618: 45.3125% ( 529) 00:11:55.104 9234.618 - 9294.196: 49.3477% ( 532) 00:11:55.104 9294.196 - 9353.775: 53.1781% ( 505) 00:11:55.104 9353.775 - 9413.353: 57.0692% ( 513) 00:11:55.104 9413.353 - 9472.931: 60.9754% ( 515) 00:11:55.104 9472.931 - 9532.509: 64.6465% ( 484) 00:11:55.104 9532.509 - 9592.087: 68.0370% ( 447) 00:11:55.104 9592.087 - 9651.665: 71.1696% ( 413) 00:11:55.104 9651.665 - 9711.244: 74.0898% ( 385) 00:11:55.104 9711.244 - 9770.822: 76.6459% ( 337) 00:11:55.104 9770.822 - 9830.400: 78.9897% ( 309) 00:11:55.104 9830.400 - 9889.978: 81.1666% ( 287) 00:11:55.104 9889.978 - 9949.556: 83.1235% ( 258) 00:11:55.104 9949.556 - 10009.135: 84.9059% ( 235) 00:11:55.104 10009.135 - 10068.713: 86.4609% ( 205) 00:11:55.104 10068.713 - 10128.291: 87.8489% ( 183) 00:11:55.104 10128.291 - 10187.869: 89.2370% ( 183) 00:11:55.104 10187.869 - 10247.447: 90.4885% ( 165) 00:11:55.104 10247.447 - 10307.025: 91.6110% ( 148) 00:11:55.104 10307.025 - 10366.604: 92.6123% ( 132) 00:11:55.104 10366.604 - 10426.182: 93.5528% ( 124) 00:11:55.104 10426.182 - 10485.760: 94.3113% ( 100) 00:11:55.104 10485.760 - 10545.338: 94.8346% ( 69) 00:11:55.104 10545.338 - 10604.916: 95.1684% ( 44) 00:11:55.104 10604.916 - 10664.495: 95.4718% ( 40) 00:11:55.104 10664.495 - 10724.073: 95.6159% ( 19) 00:11:55.104 10724.073 - 10783.651: 95.7145% ( 13) 00:11:55.104 10783.651 - 10843.229: 95.7904% ( 10) 00:11:55.104 10843.229 - 10902.807: 95.8738% ( 11) 00:11:55.104 10902.807 - 10962.385: 95.8890% ( 2) 00:11:55.104 10962.385 - 11021.964: 95.9117% ( 3) 00:11:55.104 11021.964 - 11081.542: 95.9193% ( 1) 00:11:55.104 11081.542 - 11141.120: 95.9421% ( 3) 00:11:55.104 11141.120 - 11200.698: 95.9572% ( 2) 00:11:55.104 11200.698 - 11260.276: 95.9724% ( 2) 00:11:55.104 11260.276 - 11319.855: 95.9876% ( 2) 00:11:55.104 11319.855 - 11379.433: 96.0027% ( 2) 00:11:55.104 11379.433 - 11439.011: 96.0255% ( 3) 00:11:55.104 11439.011 - 11498.589: 96.0331% ( 1) 00:11:55.104 11498.589 - 11558.167: 96.0482% ( 2) 00:11:55.104 11558.167 - 11617.745: 96.0634% ( 2) 00:11:55.104 11617.745 - 11677.324: 96.0786% ( 2) 00:11:55.104 11677.324 - 11736.902: 96.0938% ( 2) 00:11:55.104 11736.902 - 11796.480: 96.1165% ( 3) 00:11:55.104 13107.200 - 13166.778: 96.1241% ( 1) 00:11:55.104 13166.778 - 13226.356: 96.1393% ( 2) 00:11:55.104 13226.356 - 13285.935: 96.1544% ( 2) 00:11:55.104 13285.935 - 13345.513: 96.1696% ( 2) 00:11:55.104 13345.513 - 13405.091: 96.1848% ( 2) 00:11:55.104 13405.091 - 13464.669: 96.2151% ( 4) 00:11:55.104 13464.669 - 13524.247: 96.2606% ( 6) 00:11:55.104 13524.247 - 13583.825: 96.2910% ( 4) 00:11:55.104 13583.825 - 13643.404: 96.3213% ( 4) 00:11:55.104 13643.404 - 13702.982: 96.3668% ( 6) 00:11:55.104 
13702.982 - 13762.560: 96.4199% ( 7) 00:11:55.104 13762.560 - 13822.138: 96.4654% ( 6) 00:11:55.104 13822.138 - 13881.716: 96.5413% ( 10) 00:11:55.104 13881.716 - 13941.295: 96.5868% ( 6) 00:11:55.104 13941.295 - 14000.873: 96.6475% ( 8) 00:11:55.104 14000.873 - 14060.451: 96.7081% ( 8) 00:11:55.104 14060.451 - 14120.029: 96.7536% ( 6) 00:11:55.104 14120.029 - 14179.607: 96.8067% ( 7) 00:11:55.104 14179.607 - 14239.185: 96.8750% ( 9) 00:11:55.104 14239.185 - 14298.764: 96.9584% ( 11) 00:11:55.104 14298.764 - 14358.342: 97.0419% ( 11) 00:11:55.104 14358.342 - 14417.920: 97.1329% ( 12) 00:11:55.104 14417.920 - 14477.498: 97.2163% ( 11) 00:11:55.104 14477.498 - 14537.076: 97.2998% ( 11) 00:11:55.104 14537.076 - 14596.655: 97.3984% ( 13) 00:11:55.104 14596.655 - 14656.233: 97.4742% ( 10) 00:11:55.104 14656.233 - 14715.811: 97.5652% ( 12) 00:11:55.104 14715.811 - 14775.389: 97.6562% ( 12) 00:11:55.104 14775.389 - 14834.967: 97.7397% ( 11) 00:11:55.104 14834.967 - 14894.545: 97.8231% ( 11) 00:11:55.104 14894.545 - 14954.124: 97.8990% ( 10) 00:11:55.104 14954.124 - 15013.702: 97.9672% ( 9) 00:11:55.104 15013.702 - 15073.280: 98.0203% ( 7) 00:11:55.104 15073.280 - 15132.858: 98.0734% ( 7) 00:11:55.104 15132.858 - 15192.436: 98.1265% ( 7) 00:11:55.104 15192.436 - 15252.015: 98.1644% ( 5) 00:11:55.104 15252.015 - 15371.171: 98.2479% ( 11) 00:11:55.104 15371.171 - 15490.327: 98.3465% ( 13) 00:11:55.104 15490.327 - 15609.484: 98.4451% ( 13) 00:11:55.104 15609.484 - 15728.640: 98.5513% ( 14) 00:11:55.104 15728.640 - 15847.796: 98.6499% ( 13) 00:11:55.104 15847.796 - 15966.953: 98.7409% ( 12) 00:11:55.104 15966.953 - 16086.109: 98.8243% ( 11) 00:11:55.104 16086.109 - 16205.265: 98.9002% ( 10) 00:11:55.104 16205.265 - 16324.422: 98.9457% ( 6) 00:11:55.104 16324.422 - 16443.578: 98.9684% ( 3) 00:11:55.104 16443.578 - 16562.735: 98.9912% ( 3) 00:11:55.104 16562.735 - 16681.891: 99.0140% ( 3) 00:11:55.104 16681.891 - 16801.047: 99.0291% ( 2) 00:11:55.104 21567.302 - 21686.458: 99.0443% ( 2) 00:11:55.104 21686.458 - 21805.615: 99.0746% ( 4) 00:11:55.104 21805.615 - 21924.771: 99.0898% ( 2) 00:11:55.104 21924.771 - 22043.927: 99.1126% ( 3) 00:11:55.104 22043.927 - 22163.084: 99.1353% ( 3) 00:11:55.104 22163.084 - 22282.240: 99.1581% ( 3) 00:11:55.104 22282.240 - 22401.396: 99.1808% ( 3) 00:11:55.104 22401.396 - 22520.553: 99.2036% ( 3) 00:11:55.104 22520.553 - 22639.709: 99.2263% ( 3) 00:11:55.104 22639.709 - 22758.865: 99.2491% ( 3) 00:11:55.104 22758.865 - 22878.022: 99.2643% ( 2) 00:11:55.105 22878.022 - 22997.178: 99.2870% ( 3) 00:11:55.105 22997.178 - 23116.335: 99.3098% ( 3) 00:11:55.105 23116.335 - 23235.491: 99.3249% ( 2) 00:11:55.105 23235.491 - 23354.647: 99.3477% ( 3) 00:11:55.105 23354.647 - 23473.804: 99.3704% ( 3) 00:11:55.105 23473.804 - 23592.960: 99.3932% ( 3) 00:11:55.105 23592.960 - 23712.116: 99.4160% ( 3) 00:11:55.105 23712.116 - 23831.273: 99.4387% ( 3) 00:11:55.105 23831.273 - 23950.429: 99.4615% ( 3) 00:11:55.105 23950.429 - 24069.585: 99.4842% ( 3) 00:11:55.105 24069.585 - 24188.742: 99.5070% ( 3) 00:11:55.105 24188.742 - 24307.898: 99.5146% ( 1) 00:11:55.105 30742.342 - 30980.655: 99.5525% ( 5) 00:11:55.105 30980.655 - 31218.967: 99.5904% ( 5) 00:11:55.105 31218.967 - 31457.280: 99.6359% ( 6) 00:11:55.105 31457.280 - 31695.593: 99.6738% ( 5) 00:11:55.105 31695.593 - 31933.905: 99.7118% ( 5) 00:11:55.105 31933.905 - 32172.218: 99.7573% ( 6) 00:11:55.105 32172.218 - 32410.531: 99.7952% ( 5) 00:11:55.105 32410.531 - 32648.844: 99.8407% ( 6) 00:11:55.105 32648.844 - 32887.156: 99.8786% ( 
5) 00:11:55.105 32887.156 - 33125.469: 99.9242% ( 6) 00:11:55.105 33125.469 - 33363.782: 99.9621% ( 5) 00:11:55.105 33363.782 - 33602.095: 100.0000% ( 5) 00:11:55.105 00:11:55.105 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:55.105 ============================================================================== 00:11:55.105 Range in us Cumulative IO count 00:11:55.105 3798.109 - 3813.004: 0.0228% ( 3) 00:11:55.105 3813.004 - 3842.793: 0.0379% ( 2) 00:11:55.105 3842.793 - 3872.582: 0.0531% ( 2) 00:11:55.105 3872.582 - 3902.371: 0.0758% ( 3) 00:11:55.105 3902.371 - 3932.160: 0.0910% ( 2) 00:11:55.105 3932.160 - 3961.949: 0.1138% ( 3) 00:11:55.105 3961.949 - 3991.738: 0.1289% ( 2) 00:11:55.105 3991.738 - 4021.527: 0.1517% ( 3) 00:11:55.105 4021.527 - 4051.316: 0.1669% ( 2) 00:11:55.105 4051.316 - 4081.105: 0.1896% ( 3) 00:11:55.105 4081.105 - 4110.895: 0.2048% ( 2) 00:11:55.105 4110.895 - 4140.684: 0.2275% ( 3) 00:11:55.105 4140.684 - 4170.473: 0.2427% ( 2) 00:11:55.105 4170.473 - 4200.262: 0.2655% ( 3) 00:11:55.105 4200.262 - 4230.051: 0.2731% ( 1) 00:11:55.105 4230.051 - 4259.840: 0.2958% ( 3) 00:11:55.105 4259.840 - 4289.629: 0.3110% ( 2) 00:11:55.105 4289.629 - 4319.418: 0.3337% ( 3) 00:11:55.105 4319.418 - 4349.207: 0.3489% ( 2) 00:11:55.105 4349.207 - 4378.996: 0.3717% ( 3) 00:11:55.105 4378.996 - 4408.785: 0.3944% ( 3) 00:11:55.105 4408.785 - 4438.575: 0.4096% ( 2) 00:11:55.105 4438.575 - 4468.364: 0.4323% ( 3) 00:11:55.105 4468.364 - 4498.153: 0.4551% ( 3) 00:11:55.105 4498.153 - 4527.942: 0.4779% ( 3) 00:11:55.105 4527.942 - 4557.731: 0.4854% ( 1) 00:11:55.105 6672.756 - 6702.545: 0.4930% ( 1) 00:11:55.105 6702.545 - 6732.335: 0.5158% ( 3) 00:11:55.105 6732.335 - 6762.124: 0.5385% ( 3) 00:11:55.105 6762.124 - 6791.913: 0.5537% ( 2) 00:11:55.105 6791.913 - 6821.702: 0.5765% ( 3) 00:11:55.105 6821.702 - 6851.491: 0.5916% ( 2) 00:11:55.105 6851.491 - 6881.280: 0.6144% ( 3) 00:11:55.105 6881.280 - 6911.069: 0.6296% ( 2) 00:11:55.105 6911.069 - 6940.858: 0.6371% ( 1) 00:11:55.105 6940.858 - 6970.647: 0.6599% ( 3) 00:11:55.105 6970.647 - 7000.436: 0.6751% ( 2) 00:11:55.105 7000.436 - 7030.225: 0.6902% ( 2) 00:11:55.105 7030.225 - 7060.015: 0.7130% ( 3) 00:11:55.105 7060.015 - 7089.804: 0.7282% ( 2) 00:11:55.105 7089.804 - 7119.593: 0.7509% ( 3) 00:11:55.105 7119.593 - 7149.382: 0.7661% ( 2) 00:11:55.105 7149.382 - 7179.171: 0.7888% ( 3) 00:11:55.105 7179.171 - 7208.960: 0.8040% ( 2) 00:11:55.105 7208.960 - 7238.749: 0.8192% ( 2) 00:11:55.105 7238.749 - 7268.538: 0.8419% ( 3) 00:11:55.105 7268.538 - 7298.327: 0.8571% ( 2) 00:11:55.105 7298.327 - 7328.116: 0.8647% ( 1) 00:11:55.105 7328.116 - 7357.905: 0.8799% ( 2) 00:11:55.105 7357.905 - 7387.695: 0.8950% ( 2) 00:11:55.105 7387.695 - 7417.484: 0.9102% ( 2) 00:11:55.105 7417.484 - 7447.273: 0.9254% ( 2) 00:11:55.105 7447.273 - 7477.062: 0.9405% ( 2) 00:11:55.105 7477.062 - 7506.851: 0.9557% ( 2) 00:11:55.105 7506.851 - 7536.640: 0.9709% ( 2) 00:11:55.105 7923.898 - 7983.476: 0.9860% ( 2) 00:11:55.105 7983.476 - 8043.055: 1.0088% ( 3) 00:11:55.105 8043.055 - 8102.633: 1.0543% ( 6) 00:11:55.105 8102.633 - 8162.211: 1.1377% ( 11) 00:11:55.105 8162.211 - 8221.789: 1.3122% ( 23) 00:11:55.105 8221.789 - 8281.367: 1.6156% ( 40) 00:11:55.105 8281.367 - 8340.945: 2.2300% ( 81) 00:11:55.105 8340.945 - 8400.524: 3.0871% ( 113) 00:11:55.105 8400.524 - 8460.102: 4.2855% ( 158) 00:11:55.105 8460.102 - 8519.680: 5.8101% ( 201) 00:11:55.105 8519.680 - 8579.258: 7.7442% ( 255) 00:11:55.105 8579.258 - 8638.836: 9.9059% ( 285) 00:11:55.105 
8638.836 - 8698.415: 12.5910% ( 354) 00:11:55.105 8698.415 - 8757.993: 15.5112% ( 385) 00:11:55.105 8757.993 - 8817.571: 18.6438% ( 413) 00:11:55.105 8817.571 - 8877.149: 21.9736% ( 439) 00:11:55.105 8877.149 - 8936.727: 25.6826% ( 489) 00:11:55.105 8936.727 - 8996.305: 29.4524% ( 497) 00:11:55.105 8996.305 - 9055.884: 33.3965% ( 520) 00:11:55.105 9055.884 - 9115.462: 37.5000% ( 541) 00:11:55.105 9115.462 - 9175.040: 41.6490% ( 547) 00:11:55.105 9175.040 - 9234.618: 45.7145% ( 536) 00:11:55.105 9234.618 - 9294.196: 49.6890% ( 524) 00:11:55.105 9294.196 - 9353.775: 53.6104% ( 517) 00:11:55.105 9353.775 - 9413.353: 57.3650% ( 495) 00:11:55.105 9413.353 - 9472.931: 61.1423% ( 498) 00:11:55.105 9472.931 - 9532.509: 64.8210% ( 485) 00:11:55.105 9532.509 - 9592.087: 68.1735% ( 442) 00:11:55.105 9592.087 - 9651.665: 71.3820% ( 423) 00:11:55.105 9651.665 - 9711.244: 74.2643% ( 380) 00:11:55.105 9711.244 - 9770.822: 76.8811% ( 345) 00:11:55.105 9770.822 - 9830.400: 79.1945% ( 305) 00:11:55.105 9830.400 - 9889.978: 81.3031% ( 278) 00:11:55.105 9889.978 - 9949.556: 83.3283% ( 267) 00:11:55.105 9949.556 - 10009.135: 85.1107% ( 235) 00:11:55.105 10009.135 - 10068.713: 86.6960% ( 209) 00:11:55.105 10068.713 - 10128.291: 88.0765% ( 182) 00:11:55.105 10128.291 - 10187.869: 89.3735% ( 171) 00:11:55.105 10187.869 - 10247.447: 90.5795% ( 159) 00:11:55.105 10247.447 - 10307.025: 91.7552% ( 155) 00:11:55.105 10307.025 - 10366.604: 92.8095% ( 139) 00:11:55.105 10366.604 - 10426.182: 93.6742% ( 114) 00:11:55.105 10426.182 - 10485.760: 94.4175% ( 98) 00:11:55.105 10485.760 - 10545.338: 94.8726% ( 60) 00:11:55.105 10545.338 - 10604.916: 95.2594% ( 51) 00:11:55.105 10604.916 - 10664.495: 95.5400% ( 37) 00:11:55.105 10664.495 - 10724.073: 95.7145% ( 23) 00:11:55.105 10724.073 - 10783.651: 95.8207% ( 14) 00:11:55.105 10783.651 - 10843.229: 95.9117% ( 12) 00:11:55.105 10843.229 - 10902.807: 95.9876% ( 10) 00:11:55.105 10902.807 - 10962.385: 96.0407% ( 7) 00:11:55.105 10962.385 - 11021.964: 96.0558% ( 2) 00:11:55.105 11021.964 - 11081.542: 96.0710% ( 2) 00:11:55.105 11081.542 - 11141.120: 96.0862% ( 2) 00:11:55.105 11141.120 - 11200.698: 96.1013% ( 2) 00:11:55.105 11200.698 - 11260.276: 96.1165% ( 2) 00:11:55.105 12690.153 - 12749.731: 96.1241% ( 1) 00:11:55.105 12749.731 - 12809.309: 96.1317% ( 1) 00:11:55.105 12809.309 - 12868.887: 96.1544% ( 3) 00:11:55.105 12868.887 - 12928.465: 96.1696% ( 2) 00:11:55.105 12928.465 - 12988.044: 96.1924% ( 3) 00:11:55.105 12988.044 - 13047.622: 96.2075% ( 2) 00:11:55.105 13047.622 - 13107.200: 96.2303% ( 3) 00:11:55.105 13107.200 - 13166.778: 96.2454% ( 2) 00:11:55.105 13166.778 - 13226.356: 96.2682% ( 3) 00:11:55.105 13226.356 - 13285.935: 96.2910% ( 3) 00:11:55.105 13285.935 - 13345.513: 96.3061% ( 2) 00:11:55.105 13345.513 - 13405.091: 96.3213% ( 2) 00:11:55.105 13405.091 - 13464.669: 96.3441% ( 3) 00:11:55.105 13464.669 - 13524.247: 96.3592% ( 2) 00:11:55.105 13524.247 - 13583.825: 96.3820% ( 3) 00:11:55.105 13583.825 - 13643.404: 96.4275% ( 6) 00:11:55.105 13643.404 - 13702.982: 96.4730% ( 6) 00:11:55.105 13702.982 - 13762.560: 96.5261% ( 7) 00:11:55.105 13762.560 - 13822.138: 96.5640% ( 5) 00:11:55.105 13822.138 - 13881.716: 96.6247% ( 8) 00:11:55.105 13881.716 - 13941.295: 96.6702% ( 6) 00:11:55.105 13941.295 - 14000.873: 96.7461% ( 10) 00:11:55.105 14000.873 - 14060.451: 96.8143% ( 9) 00:11:55.105 14060.451 - 14120.029: 96.8902% ( 10) 00:11:55.105 14120.029 - 14179.607: 96.9812% ( 12) 00:11:55.105 14179.607 - 14239.185: 97.0495% ( 9) 00:11:55.105 14239.185 - 14298.764: 
97.0950% ( 6) 00:11:55.105 14298.764 - 14358.342: 97.1481% ( 7) 00:11:55.105 14358.342 - 14417.920: 97.2163% ( 9) 00:11:55.105 14417.920 - 14477.498: 97.2922% ( 10) 00:11:55.105 14477.498 - 14537.076: 97.3529% ( 8) 00:11:55.105 14537.076 - 14596.655: 97.4287% ( 10) 00:11:55.105 14596.655 - 14656.233: 97.5046% ( 10) 00:11:55.105 14656.233 - 14715.811: 97.5501% ( 6) 00:11:55.105 14715.811 - 14775.389: 97.6259% ( 10) 00:11:55.105 14775.389 - 14834.967: 97.6866% ( 8) 00:11:55.105 14834.967 - 14894.545: 97.7624% ( 10) 00:11:55.105 14894.545 - 14954.124: 97.8383% ( 10) 00:11:55.105 14954.124 - 15013.702: 97.9066% ( 9) 00:11:55.105 15013.702 - 15073.280: 97.9748% ( 9) 00:11:55.105 15073.280 - 15132.858: 98.0279% ( 7) 00:11:55.105 15132.858 - 15192.436: 98.0962% ( 9) 00:11:55.105 15192.436 - 15252.015: 98.1644% ( 9) 00:11:55.105 15252.015 - 15371.171: 98.3010% ( 18) 00:11:55.105 15371.171 - 15490.327: 98.4375% ( 18) 00:11:55.105 15490.327 - 15609.484: 98.5361% ( 13) 00:11:55.105 15609.484 - 15728.640: 98.6195% ( 11) 00:11:55.105 15728.640 - 15847.796: 98.7333% ( 15) 00:11:55.105 15847.796 - 15966.953: 98.8092% ( 10) 00:11:55.105 15966.953 - 16086.109: 98.8698% ( 8) 00:11:55.105 16086.109 - 16205.265: 98.9154% ( 6) 00:11:55.106 16205.265 - 16324.422: 98.9457% ( 4) 00:11:55.106 16324.422 - 16443.578: 98.9684% ( 3) 00:11:55.106 16443.578 - 16562.735: 98.9912% ( 3) 00:11:55.106 16562.735 - 16681.891: 99.0140% ( 3) 00:11:55.106 16681.891 - 16801.047: 99.0291% ( 2) 00:11:55.106 19660.800 - 19779.956: 99.0443% ( 2) 00:11:55.106 19779.956 - 19899.113: 99.0746% ( 4) 00:11:55.106 19899.113 - 20018.269: 99.0898% ( 2) 00:11:55.106 20018.269 - 20137.425: 99.1126% ( 3) 00:11:55.106 20137.425 - 20256.582: 99.1353% ( 3) 00:11:55.106 20256.582 - 20375.738: 99.1581% ( 3) 00:11:55.106 20375.738 - 20494.895: 99.1732% ( 2) 00:11:55.106 20494.895 - 20614.051: 99.1960% ( 3) 00:11:55.106 20614.051 - 20733.207: 99.2188% ( 3) 00:11:55.106 20733.207 - 20852.364: 99.2415% ( 3) 00:11:55.106 20852.364 - 20971.520: 99.2643% ( 3) 00:11:55.106 20971.520 - 21090.676: 99.2794% ( 2) 00:11:55.106 21090.676 - 21209.833: 99.3022% ( 3) 00:11:55.106 21209.833 - 21328.989: 99.3249% ( 3) 00:11:55.106 21328.989 - 21448.145: 99.3477% ( 3) 00:11:55.106 21448.145 - 21567.302: 99.3704% ( 3) 00:11:55.106 21567.302 - 21686.458: 99.3856% ( 2) 00:11:55.106 21686.458 - 21805.615: 99.4084% ( 3) 00:11:55.106 21805.615 - 21924.771: 99.4311% ( 3) 00:11:55.106 21924.771 - 22043.927: 99.4539% ( 3) 00:11:55.106 22043.927 - 22163.084: 99.4766% ( 3) 00:11:55.106 22163.084 - 22282.240: 99.4918% ( 2) 00:11:55.106 22282.240 - 22401.396: 99.5146% ( 3) 00:11:55.106 28954.996 - 29074.153: 99.5221% ( 1) 00:11:55.106 29074.153 - 29193.309: 99.5449% ( 3) 00:11:55.106 29193.309 - 29312.465: 99.5601% ( 2) 00:11:55.106 29312.465 - 29431.622: 99.5828% ( 3) 00:11:55.106 29431.622 - 29550.778: 99.6056% ( 3) 00:11:55.106 29550.778 - 29669.935: 99.6283% ( 3) 00:11:55.106 29669.935 - 29789.091: 99.6435% ( 2) 00:11:55.106 29789.091 - 29908.247: 99.6663% ( 3) 00:11:55.106 29908.247 - 30027.404: 99.6890% ( 3) 00:11:55.106 30027.404 - 30146.560: 99.7042% ( 2) 00:11:55.106 30146.560 - 30265.716: 99.7194% ( 2) 00:11:55.106 30265.716 - 30384.873: 99.7421% ( 3) 00:11:55.106 30384.873 - 30504.029: 99.7649% ( 3) 00:11:55.106 30504.029 - 30742.342: 99.8028% ( 5) 00:11:55.106 30742.342 - 30980.655: 99.8483% ( 6) 00:11:55.106 30980.655 - 31218.967: 99.8938% ( 6) 00:11:55.106 31218.967 - 31457.280: 99.9317% ( 5) 00:11:55.106 31457.280 - 31695.593: 99.9772% ( 6) 00:11:55.106 31695.593 - 
31933.905: 100.0000% ( 3) 00:11:55.106 00:11:55.106 15:38:43 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:56.196 Initializing NVMe Controllers 00:11:56.196 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:56.196 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:56.196 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:56.196 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:56.196 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:56.196 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:56.196 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:56.196 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:56.196 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:56.196 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:56.196 Initialization complete. Launching workers. 00:11:56.196 ======================================================== 00:11:56.196 Latency(us) 00:11:56.196 Device Information : IOPS MiB/s Average min max 00:11:56.196 PCIE (0000:00:13.0) NSID 1 from core 0: 11721.47 137.36 10931.01 8116.30 36822.77 00:11:56.196 PCIE (0000:00:10.0) NSID 1 from core 0: 11721.47 137.36 10916.59 7617.27 36139.89 00:11:56.196 PCIE (0000:00:11.0) NSID 1 from core 0: 11721.47 137.36 10902.98 7638.75 34844.41 00:11:56.196 PCIE (0000:00:12.0) NSID 1 from core 0: 11721.47 137.36 10888.32 6618.85 34470.93 00:11:56.196 PCIE (0000:00:12.0) NSID 2 from core 0: 11721.47 137.36 10874.19 6551.43 33500.79 00:11:56.196 PCIE (0000:00:12.0) NSID 3 from core 0: 11721.47 137.36 10859.72 6302.69 32438.35 00:11:56.196 ======================================================== 00:11:56.196 Total : 70328.83 824.17 10895.47 6302.69 36822.77 00:11:56.196 00:11:56.196 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:56.196 ================================================================================= 00:11:56.196 1.00000% : 9175.040us 00:11:56.196 10.00000% : 9949.556us 00:11:56.196 25.00000% : 10247.447us 00:11:56.196 50.00000% : 10664.495us 00:11:56.196 75.00000% : 11200.698us 00:11:56.196 90.00000% : 11736.902us 00:11:56.196 95.00000% : 12213.527us 00:11:56.196 98.00000% : 12928.465us 00:11:56.196 99.00000% : 26571.869us 00:11:56.196 99.50000% : 35746.909us 00:11:56.196 99.90000% : 36700.160us 00:11:56.196 99.99000% : 36938.473us 00:11:56.196 99.99900% : 36938.473us 00:11:56.196 99.99990% : 36938.473us 00:11:56.196 99.99999% : 36938.473us 00:11:56.196 00:11:56.197 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:56.197 ================================================================================= 00:11:56.197 1.00000% : 9055.884us 00:11:56.197 10.00000% : 9830.400us 00:11:56.197 25.00000% : 10187.869us 00:11:56.197 50.00000% : 10664.495us 00:11:56.197 75.00000% : 11200.698us 00:11:56.197 90.00000% : 11856.058us 00:11:56.197 95.00000% : 12213.527us 00:11:56.197 98.00000% : 12809.309us 00:11:56.197 99.00000% : 27286.807us 00:11:56.197 99.50000% : 34793.658us 00:11:56.197 99.90000% : 35985.222us 00:11:56.197 99.99000% : 36223.535us 00:11:56.197 99.99900% : 36223.535us 00:11:56.197 99.99990% : 36223.535us 00:11:56.197 99.99999% : 36223.535us 00:11:56.197 00:11:56.197 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:56.197 ================================================================================= 00:11:56.197 1.00000% : 9234.618us 00:11:56.197 10.00000% : 9949.556us 
00:11:56.197 25.00000% : 10247.447us 00:11:56.197 50.00000% : 10604.916us 00:11:56.197 75.00000% : 11200.698us 00:11:56.197 90.00000% : 11736.902us 00:11:56.197 95.00000% : 12034.793us 00:11:56.197 98.00000% : 12690.153us 00:11:56.197 99.00000% : 26452.713us 00:11:56.197 99.50000% : 33602.095us 00:11:56.197 99.90000% : 34793.658us 00:11:56.197 99.99000% : 35031.971us 00:11:56.197 99.99900% : 35031.971us 00:11:56.197 99.99990% : 35031.971us 00:11:56.197 99.99999% : 35031.971us 00:11:56.197 00:11:56.197 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:56.197 ================================================================================= 00:11:56.197 1.00000% : 8936.727us 00:11:56.197 10.00000% : 9949.556us 00:11:56.197 25.00000% : 10247.447us 00:11:56.197 50.00000% : 10604.916us 00:11:56.197 75.00000% : 11200.698us 00:11:56.197 90.00000% : 11736.902us 00:11:56.197 95.00000% : 12094.371us 00:11:56.197 98.00000% : 12630.575us 00:11:56.197 99.00000% : 26452.713us 00:11:56.197 99.50000% : 33125.469us 00:11:56.197 99.90000% : 34317.033us 00:11:56.197 99.99000% : 34555.345us 00:11:56.197 99.99900% : 34555.345us 00:11:56.197 99.99990% : 34555.345us 00:11:56.197 99.99999% : 34555.345us 00:11:56.197 00:11:56.197 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:56.197 ================================================================================= 00:11:56.197 1.00000% : 8877.149us 00:11:56.197 10.00000% : 9949.556us 00:11:56.197 25.00000% : 10247.447us 00:11:56.197 50.00000% : 10604.916us 00:11:56.197 75.00000% : 11200.698us 00:11:56.197 90.00000% : 11736.902us 00:11:56.197 95.00000% : 12094.371us 00:11:56.197 98.00000% : 12630.575us 00:11:56.197 99.00000% : 25976.087us 00:11:56.197 99.50000% : 32172.218us 00:11:56.197 99.90000% : 33363.782us 00:11:56.197 99.99000% : 33602.095us 00:11:56.197 99.99900% : 33602.095us 00:11:56.197 99.99990% : 33602.095us 00:11:56.197 99.99999% : 33602.095us 00:11:56.197 00:11:56.197 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:56.197 ================================================================================= 00:11:56.197 1.00000% : 8757.993us 00:11:56.197 10.00000% : 9889.978us 00:11:56.197 25.00000% : 10247.447us 00:11:56.197 50.00000% : 10604.916us 00:11:56.197 75.00000% : 11200.698us 00:11:56.197 90.00000% : 11736.902us 00:11:56.197 95.00000% : 12094.371us 00:11:56.197 98.00000% : 12630.575us 00:11:56.197 99.00000% : 25141.993us 00:11:56.197 99.50000% : 31218.967us 00:11:56.197 99.90000% : 32172.218us 00:11:56.197 99.99000% : 32410.531us 00:11:56.197 99.99900% : 32648.844us 00:11:56.197 99.99990% : 32648.844us 00:11:56.197 99.99999% : 32648.844us 00:11:56.197 00:11:56.197 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:56.197 ============================================================================== 00:11:56.197 Range in us Cumulative IO count 00:11:56.197 8102.633 - 8162.211: 0.0340% ( 4) 00:11:56.197 8162.211 - 8221.789: 0.1019% ( 8) 00:11:56.197 8221.789 - 8281.367: 0.1359% ( 4) 00:11:56.197 8281.367 - 8340.945: 0.2972% ( 19) 00:11:56.197 8340.945 - 8400.524: 0.3736% ( 9) 00:11:56.197 8400.524 - 8460.102: 0.4076% ( 4) 00:11:56.197 8460.102 - 8519.680: 0.4331% ( 3) 00:11:56.197 8519.680 - 8579.258: 0.4501% ( 2) 00:11:56.197 8579.258 - 8638.836: 0.4840% ( 4) 00:11:56.197 8638.836 - 8698.415: 0.5095% ( 3) 00:11:56.197 8698.415 - 8757.993: 0.5520% ( 5) 00:11:56.197 8817.571 - 8877.149: 0.5605% ( 1) 00:11:56.197 8877.149 - 8936.727: 0.5859% ( 3) 00:11:56.197 8936.727 
- 8996.305: 0.6369% ( 6) 00:11:56.197 8996.305 - 9055.884: 0.7133% ( 9) 00:11:56.197 9055.884 - 9115.462: 0.7982% ( 10) 00:11:56.197 9115.462 - 9175.040: 1.0954% ( 35) 00:11:56.197 9175.040 - 9234.618: 1.3332% ( 28) 00:11:56.197 9234.618 - 9294.196: 1.6050% ( 32) 00:11:56.197 9294.196 - 9353.775: 2.0720% ( 55) 00:11:56.197 9353.775 - 9413.353: 2.5136% ( 52) 00:11:56.197 9413.353 - 9472.931: 3.0656% ( 65) 00:11:56.197 9472.931 - 9532.509: 3.8298% ( 90) 00:11:56.197 9532.509 - 9592.087: 4.5177% ( 81) 00:11:56.197 9592.087 - 9651.665: 5.0526% ( 63) 00:11:56.197 9651.665 - 9711.244: 5.8084% ( 89) 00:11:56.197 9711.244 - 9770.822: 6.6916% ( 104) 00:11:56.197 9770.822 - 9830.400: 7.8635% ( 138) 00:11:56.197 9830.400 - 9889.978: 9.4684% ( 189) 00:11:56.197 9889.978 - 9949.556: 11.3791% ( 225) 00:11:56.197 9949.556 - 10009.135: 14.3597% ( 351) 00:11:56.197 10009.135 - 10068.713: 17.0346% ( 315) 00:11:56.197 10068.713 - 10128.291: 20.3040% ( 385) 00:11:56.197 10128.291 - 10187.869: 23.8791% ( 421) 00:11:56.197 10187.869 - 10247.447: 27.4541% ( 421) 00:11:56.197 10247.447 - 10307.025: 30.7999% ( 394) 00:11:56.197 10307.025 - 10366.604: 34.1797% ( 398) 00:11:56.197 10366.604 - 10426.182: 37.9586% ( 445) 00:11:56.197 10426.182 - 10485.760: 41.6185% ( 431) 00:11:56.197 10485.760 - 10545.338: 46.0258% ( 519) 00:11:56.197 10545.338 - 10604.916: 49.8811% ( 454) 00:11:56.197 10604.916 - 10664.495: 53.2099% ( 392) 00:11:56.197 10664.495 - 10724.073: 56.3944% ( 375) 00:11:56.197 10724.073 - 10783.651: 59.4854% ( 364) 00:11:56.197 10783.651 - 10843.229: 62.3556% ( 338) 00:11:56.197 10843.229 - 10902.807: 64.5805% ( 262) 00:11:56.197 10902.807 - 10962.385: 67.2045% ( 309) 00:11:56.197 10962.385 - 11021.964: 69.7011% ( 294) 00:11:56.197 11021.964 - 11081.542: 72.1637% ( 290) 00:11:56.197 11081.542 - 11141.120: 74.2188% ( 242) 00:11:56.197 11141.120 - 11200.698: 76.3077% ( 246) 00:11:56.197 11200.698 - 11260.276: 78.4137% ( 248) 00:11:56.197 11260.276 - 11319.855: 80.7490% ( 275) 00:11:56.197 11319.855 - 11379.433: 82.7615% ( 237) 00:11:56.197 11379.433 - 11439.011: 84.2901% ( 180) 00:11:56.197 11439.011 - 11498.589: 85.6148% ( 156) 00:11:56.197 11498.589 - 11558.167: 86.8207% ( 142) 00:11:56.197 11558.167 - 11617.745: 87.9246% ( 130) 00:11:56.197 11617.745 - 11677.324: 89.0285% ( 130) 00:11:56.197 11677.324 - 11736.902: 90.0476% ( 120) 00:11:56.197 11736.902 - 11796.480: 91.1175% ( 126) 00:11:56.197 11796.480 - 11856.058: 92.0007% ( 104) 00:11:56.197 11856.058 - 11915.636: 92.8923% ( 105) 00:11:56.197 11915.636 - 11975.215: 93.4783% ( 69) 00:11:56.197 11975.215 - 12034.793: 93.8519% ( 44) 00:11:56.197 12034.793 - 12094.371: 94.2765% ( 50) 00:11:56.197 12094.371 - 12153.949: 94.7181% ( 52) 00:11:56.197 12153.949 - 12213.527: 95.0662% ( 41) 00:11:56.197 12213.527 - 12273.105: 95.4908% ( 50) 00:11:56.197 12273.105 - 12332.684: 95.8305% ( 40) 00:11:56.197 12332.684 - 12392.262: 96.1107% ( 33) 00:11:56.197 12392.262 - 12451.840: 96.4249% ( 37) 00:11:56.197 12451.840 - 12511.418: 96.7221% ( 35) 00:11:56.197 12511.418 - 12570.996: 96.9854% ( 31) 00:11:56.197 12570.996 - 12630.575: 97.1892% ( 24) 00:11:56.197 12630.575 - 12690.153: 97.3760% ( 22) 00:11:56.197 12690.153 - 12749.731: 97.6478% ( 32) 00:11:56.197 12749.731 - 12809.309: 97.8261% ( 21) 00:11:56.197 12809.309 - 12868.887: 97.9620% ( 16) 00:11:56.197 12868.887 - 12928.465: 98.0724% ( 13) 00:11:56.197 12928.465 - 12988.044: 98.1403% ( 8) 00:11:56.197 12988.044 - 13047.622: 98.2082% ( 8) 00:11:56.197 13047.622 - 13107.200: 98.2762% ( 8) 00:11:56.197 13107.200 - 
13166.778: 98.3526% ( 9) 00:11:56.197 13166.778 - 13226.356: 98.4035% ( 6) 00:11:56.197 13226.356 - 13285.935: 98.5054% ( 12) 00:11:56.197 13285.935 - 13345.513: 98.6073% ( 12) 00:11:56.197 13345.513 - 13405.091: 98.6838% ( 9) 00:11:56.197 13405.091 - 13464.669: 98.6923% ( 1) 00:11:56.197 13464.669 - 13524.247: 98.7007% ( 1) 00:11:56.197 13524.247 - 13583.825: 98.7092% ( 1) 00:11:56.197 13583.825 - 13643.404: 98.7347% ( 3) 00:11:56.197 13643.404 - 13702.982: 98.7432% ( 1) 00:11:56.197 13702.982 - 13762.560: 98.7602% ( 2) 00:11:56.197 13762.560 - 13822.138: 98.7772% ( 2) 00:11:56.197 13822.138 - 13881.716: 98.8026% ( 3) 00:11:56.197 13881.716 - 13941.295: 98.8111% ( 1) 00:11:56.197 13941.295 - 14000.873: 98.8281% ( 2) 00:11:56.197 14000.873 - 14060.451: 98.8536% ( 3) 00:11:56.197 14060.451 - 14120.029: 98.8621% ( 1) 00:11:56.197 14120.029 - 14179.607: 98.8791% ( 2) 00:11:56.197 14179.607 - 14239.185: 98.8876% ( 1) 00:11:56.197 14239.185 - 14298.764: 98.8961% ( 1) 00:11:56.198 14298.764 - 14358.342: 98.9130% ( 2) 00:11:56.198 26095.244 - 26214.400: 98.9385% ( 3) 00:11:56.198 26214.400 - 26333.556: 98.9640% ( 3) 00:11:56.198 26333.556 - 26452.713: 98.9980% ( 4) 00:11:56.198 26452.713 - 26571.869: 99.0234% ( 3) 00:11:56.198 26571.869 - 26691.025: 99.0574% ( 4) 00:11:56.198 26691.025 - 26810.182: 99.0914% ( 4) 00:11:56.198 26810.182 - 26929.338: 99.1253% ( 4) 00:11:56.198 26929.338 - 27048.495: 99.1423% ( 2) 00:11:56.198 27048.495 - 27167.651: 99.1678% ( 3) 00:11:56.198 27167.651 - 27286.807: 99.2103% ( 5) 00:11:56.198 27286.807 - 27405.964: 99.2357% ( 3) 00:11:56.198 27405.964 - 27525.120: 99.2697% ( 4) 00:11:56.198 27525.120 - 27644.276: 99.2952% ( 3) 00:11:56.198 27644.276 - 27763.433: 99.3207% ( 3) 00:11:56.198 27763.433 - 27882.589: 99.3546% ( 4) 00:11:56.198 27882.589 - 28001.745: 99.3886% ( 4) 00:11:56.198 28001.745 - 28120.902: 99.4226% ( 4) 00:11:56.198 28120.902 - 28240.058: 99.4480% ( 3) 00:11:56.198 28240.058 - 28359.215: 99.4565% ( 1) 00:11:56.198 35270.284 - 35508.596: 99.4990% ( 5) 00:11:56.198 35508.596 - 35746.909: 99.5839% ( 10) 00:11:56.198 35746.909 - 35985.222: 99.6773% ( 11) 00:11:56.198 35985.222 - 36223.535: 99.7707% ( 11) 00:11:56.198 36223.535 - 36461.847: 99.8726% ( 12) 00:11:56.198 36461.847 - 36700.160: 99.9575% ( 10) 00:11:56.198 36700.160 - 36938.473: 100.0000% ( 5) 00:11:56.198 00:11:56.198 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:56.198 ============================================================================== 00:11:56.198 Range in us Cumulative IO count 00:11:56.198 7596.218 - 7626.007: 0.0085% ( 1) 00:11:56.198 7685.585 - 7745.164: 0.0170% ( 1) 00:11:56.198 7745.164 - 7804.742: 0.0255% ( 1) 00:11:56.198 7804.742 - 7864.320: 0.0934% ( 8) 00:11:56.198 7864.320 - 7923.898: 0.2038% ( 13) 00:11:56.198 7923.898 - 7983.476: 0.2717% ( 8) 00:11:56.198 7983.476 - 8043.055: 0.3057% ( 4) 00:11:56.198 8162.211 - 8221.789: 0.3312% ( 3) 00:11:56.198 8221.789 - 8281.367: 0.3482% ( 2) 00:11:56.198 8281.367 - 8340.945: 0.3651% ( 2) 00:11:56.198 8340.945 - 8400.524: 0.3906% ( 3) 00:11:56.198 8400.524 - 8460.102: 0.3991% ( 1) 00:11:56.198 8460.102 - 8519.680: 0.4246% ( 3) 00:11:56.198 8519.680 - 8579.258: 0.4501% ( 3) 00:11:56.198 8579.258 - 8638.836: 0.4755% ( 3) 00:11:56.198 8638.836 - 8698.415: 0.5010% ( 3) 00:11:56.198 8698.415 - 8757.993: 0.5350% ( 4) 00:11:56.198 8757.993 - 8817.571: 0.5520% ( 2) 00:11:56.198 8817.571 - 8877.149: 0.6284% ( 9) 00:11:56.198 8877.149 - 8936.727: 0.7218% ( 11) 00:11:56.198 8936.727 - 8996.305: 0.9851% ( 31) 
00:11:56.198 8996.305 - 9055.884: 1.1464% ( 19) 00:11:56.198 9055.884 - 9115.462: 1.4946% ( 41) 00:11:56.198 9115.462 - 9175.040: 1.9276% ( 51) 00:11:56.198 9175.040 - 9234.618: 2.2758% ( 41) 00:11:56.198 9234.618 - 9294.196: 2.5560% ( 33) 00:11:56.198 9294.196 - 9353.775: 2.8108% ( 30) 00:11:56.198 9353.775 - 9413.353: 3.1080% ( 35) 00:11:56.198 9413.353 - 9472.931: 3.5666% ( 54) 00:11:56.198 9472.931 - 9532.509: 4.0336% ( 55) 00:11:56.198 9532.509 - 9592.087: 4.4158% ( 45) 00:11:56.198 9592.087 - 9651.665: 5.0102% ( 70) 00:11:56.198 9651.665 - 9711.244: 6.6576% ( 194) 00:11:56.198 9711.244 - 9770.822: 8.7891% ( 251) 00:11:56.198 9770.822 - 9830.400: 11.2857% ( 294) 00:11:56.198 9830.400 - 9889.978: 13.8162% ( 298) 00:11:56.198 9889.978 - 9949.556: 16.5506% ( 322) 00:11:56.198 9949.556 - 10009.135: 18.6396% ( 246) 00:11:56.198 10009.135 - 10068.713: 20.9918% ( 277) 00:11:56.198 10068.713 - 10128.291: 23.9555% ( 349) 00:11:56.198 10128.291 - 10187.869: 26.5370% ( 304) 00:11:56.198 10187.869 - 10247.447: 29.1780% ( 311) 00:11:56.198 10247.447 - 10307.025: 32.9484% ( 444) 00:11:56.198 10307.025 - 10366.604: 36.7188% ( 444) 00:11:56.198 10366.604 - 10426.182: 40.2004% ( 410) 00:11:56.198 10426.182 - 10485.760: 43.1810% ( 351) 00:11:56.198 10485.760 - 10545.338: 46.4334% ( 383) 00:11:56.198 10545.338 - 10604.916: 49.6094% ( 374) 00:11:56.198 10604.916 - 10664.495: 52.1909% ( 304) 00:11:56.198 10664.495 - 10724.073: 54.5771% ( 281) 00:11:56.198 10724.073 - 10783.651: 57.4304% ( 336) 00:11:56.198 10783.651 - 10843.229: 60.4789% ( 359) 00:11:56.198 10843.229 - 10902.807: 63.4426% ( 349) 00:11:56.198 10902.807 - 10962.385: 65.9562% ( 296) 00:11:56.198 10962.385 - 11021.964: 68.1980% ( 264) 00:11:56.198 11021.964 - 11081.542: 70.7116% ( 296) 00:11:56.198 11081.542 - 11141.120: 72.9280% ( 261) 00:11:56.198 11141.120 - 11200.698: 75.2123% ( 269) 00:11:56.198 11200.698 - 11260.276: 77.4626% ( 265) 00:11:56.198 11260.276 - 11319.855: 79.1950% ( 204) 00:11:56.198 11319.855 - 11379.433: 80.8169% ( 191) 00:11:56.198 11379.433 - 11439.011: 82.2775% ( 172) 00:11:56.198 11439.011 - 11498.589: 83.7126% ( 169) 00:11:56.198 11498.589 - 11558.167: 85.1223% ( 166) 00:11:56.198 11558.167 - 11617.745: 86.3281% ( 142) 00:11:56.198 11617.745 - 11677.324: 87.4915% ( 137) 00:11:56.198 11677.324 - 11736.902: 88.7398% ( 147) 00:11:56.198 11736.902 - 11796.480: 89.7334% ( 117) 00:11:56.198 11796.480 - 11856.058: 90.7609% ( 121) 00:11:56.198 11856.058 - 11915.636: 91.7459% ( 116) 00:11:56.198 11915.636 - 11975.215: 92.6885% ( 111) 00:11:56.198 11975.215 - 12034.793: 93.3933% ( 83) 00:11:56.198 12034.793 - 12094.371: 93.9963% ( 71) 00:11:56.198 12094.371 - 12153.949: 94.6247% ( 74) 00:11:56.198 12153.949 - 12213.527: 95.1427% ( 61) 00:11:56.198 12213.527 - 12273.105: 95.5927% ( 53) 00:11:56.198 12273.105 - 12332.684: 96.1447% ( 65) 00:11:56.198 12332.684 - 12392.262: 96.5693% ( 50) 00:11:56.198 12392.262 - 12451.840: 96.9175% ( 41) 00:11:56.198 12451.840 - 12511.418: 97.1807% ( 31) 00:11:56.198 12511.418 - 12570.996: 97.4355% ( 30) 00:11:56.198 12570.996 - 12630.575: 97.6647% ( 27) 00:11:56.198 12630.575 - 12690.153: 97.8346% ( 20) 00:11:56.198 12690.153 - 12749.731: 97.9959% ( 19) 00:11:56.198 12749.731 - 12809.309: 98.0724% ( 9) 00:11:56.198 12809.309 - 12868.887: 98.1658% ( 11) 00:11:56.198 12868.887 - 12928.465: 98.2592% ( 11) 00:11:56.198 12928.465 - 12988.044: 98.3526% ( 11) 00:11:56.198 12988.044 - 13047.622: 98.4630% ( 13) 00:11:56.198 13047.622 - 13107.200: 98.5564% ( 11) 00:11:56.198 13107.200 - 13166.778: 
98.6413% ( 10) 00:11:56.198 13166.778 - 13226.356: 98.7007% ( 7) 00:11:56.198 13226.356 - 13285.935: 98.7517% ( 6) 00:11:56.198 13285.935 - 13345.513: 98.8281% ( 9) 00:11:56.198 13345.513 - 13405.091: 98.8451% ( 2) 00:11:56.198 13405.091 - 13464.669: 98.8621% ( 2) 00:11:56.198 13464.669 - 13524.247: 98.8876% ( 3) 00:11:56.198 13524.247 - 13583.825: 98.9130% ( 3) 00:11:56.198 26929.338 - 27048.495: 98.9470% ( 4) 00:11:56.198 27048.495 - 27167.651: 98.9810% ( 4) 00:11:56.198 27167.651 - 27286.807: 99.0319% ( 6) 00:11:56.198 27286.807 - 27405.964: 99.0574% ( 3) 00:11:56.198 27405.964 - 27525.120: 99.0829% ( 3) 00:11:56.198 27525.120 - 27644.276: 99.1168% ( 4) 00:11:56.198 27644.276 - 27763.433: 99.1593% ( 5) 00:11:56.198 27763.433 - 27882.589: 99.1848% ( 3) 00:11:56.198 27882.589 - 28001.745: 99.2272% ( 5) 00:11:56.198 28001.745 - 28120.902: 99.2612% ( 4) 00:11:56.198 28120.902 - 28240.058: 99.2952% ( 4) 00:11:56.198 28240.058 - 28359.215: 99.3376% ( 5) 00:11:56.198 28359.215 - 28478.371: 99.3631% ( 3) 00:11:56.198 28478.371 - 28597.527: 99.4056% ( 5) 00:11:56.198 28597.527 - 28716.684: 99.4480% ( 5) 00:11:56.198 28716.684 - 28835.840: 99.4565% ( 1) 00:11:56.198 34317.033 - 34555.345: 99.4990% ( 5) 00:11:56.198 34555.345 - 34793.658: 99.5754% ( 9) 00:11:56.198 34793.658 - 35031.971: 99.6518% ( 9) 00:11:56.198 35031.971 - 35270.284: 99.7198% ( 8) 00:11:56.198 35270.284 - 35508.596: 99.7877% ( 8) 00:11:56.198 35508.596 - 35746.909: 99.8811% ( 11) 00:11:56.198 35746.909 - 35985.222: 99.9575% ( 9) 00:11:56.198 35985.222 - 36223.535: 100.0000% ( 5) 00:11:56.198 00:11:56.198 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:56.198 ============================================================================== 00:11:56.198 Range in us Cumulative IO count 00:11:56.198 7626.007 - 7685.585: 0.0170% ( 2) 00:11:56.198 7685.585 - 7745.164: 0.0849% ( 8) 00:11:56.198 7745.164 - 7804.742: 0.1698% ( 10) 00:11:56.198 7804.742 - 7864.320: 0.2293% ( 7) 00:11:56.198 7864.320 - 7923.898: 0.3312% ( 12) 00:11:56.198 7923.898 - 7983.476: 0.3906% ( 7) 00:11:56.198 7983.476 - 8043.055: 0.4246% ( 4) 00:11:56.198 8043.055 - 8102.633: 0.4501% ( 3) 00:11:56.198 8102.633 - 8162.211: 0.4755% ( 3) 00:11:56.198 8162.211 - 8221.789: 0.4925% ( 2) 00:11:56.198 8221.789 - 8281.367: 0.5180% ( 3) 00:11:56.198 8281.367 - 8340.945: 0.5435% ( 3) 00:11:56.198 8877.149 - 8936.727: 0.5520% ( 1) 00:11:56.198 8936.727 - 8996.305: 0.6199% ( 8) 00:11:56.198 8996.305 - 9055.884: 0.6878% ( 8) 00:11:56.198 9055.884 - 9115.462: 0.7728% ( 10) 00:11:56.198 9115.462 - 9175.040: 0.9766% ( 24) 00:11:56.198 9175.040 - 9234.618: 1.2993% ( 38) 00:11:56.198 9234.618 - 9294.196: 1.5795% ( 33) 00:11:56.198 9294.196 - 9353.775: 2.0975% ( 61) 00:11:56.198 9353.775 - 9413.353: 2.7174% ( 73) 00:11:56.198 9413.353 - 9472.931: 3.2439% ( 62) 00:11:56.198 9472.931 - 9532.509: 4.0421% ( 94) 00:11:56.198 9532.509 - 9592.087: 4.5516% ( 60) 00:11:56.199 9592.087 - 9651.665: 5.0442% ( 58) 00:11:56.199 9651.665 - 9711.244: 5.7320% ( 81) 00:11:56.199 9711.244 - 9770.822: 6.5642% ( 98) 00:11:56.199 9770.822 - 9830.400: 7.7870% ( 144) 00:11:56.199 9830.400 - 9889.978: 9.4090% ( 191) 00:11:56.199 9889.978 - 9949.556: 11.4895% ( 245) 00:11:56.199 9949.556 - 10009.135: 13.9691% ( 292) 00:11:56.199 10009.135 - 10068.713: 17.1196% ( 371) 00:11:56.199 10068.713 - 10128.291: 20.1257% ( 354) 00:11:56.199 10128.291 - 10187.869: 23.3441% ( 379) 00:11:56.199 10187.869 - 10247.447: 27.0126% ( 432) 00:11:56.199 10247.447 - 10307.025: 30.4263% ( 402) 00:11:56.199 
10307.025 - 10366.604: 34.0268% ( 424) 00:11:56.199 10366.604 - 10426.182: 38.0520% ( 474) 00:11:56.199 10426.182 - 10485.760: 42.1281% ( 480) 00:11:56.199 10485.760 - 10545.338: 46.1702% ( 476) 00:11:56.199 10545.338 - 10604.916: 50.5095% ( 511) 00:11:56.199 10604.916 - 10664.495: 54.4327% ( 462) 00:11:56.199 10664.495 - 10724.073: 57.9229% ( 411) 00:11:56.199 10724.073 - 10783.651: 60.9630% ( 358) 00:11:56.199 10783.651 - 10843.229: 63.5275% ( 302) 00:11:56.199 10843.229 - 10902.807: 65.9477% ( 285) 00:11:56.199 10902.807 - 10962.385: 68.0367% ( 246) 00:11:56.199 10962.385 - 11021.964: 69.9134% ( 221) 00:11:56.199 11021.964 - 11081.542: 71.8240% ( 225) 00:11:56.199 11081.542 - 11141.120: 73.6413% ( 214) 00:11:56.199 11141.120 - 11200.698: 75.7388% ( 247) 00:11:56.199 11200.698 - 11260.276: 77.8448% ( 248) 00:11:56.199 11260.276 - 11319.855: 79.5686% ( 203) 00:11:56.199 11319.855 - 11379.433: 81.3264% ( 207) 00:11:56.199 11379.433 - 11439.011: 83.2286% ( 224) 00:11:56.199 11439.011 - 11498.589: 85.1223% ( 223) 00:11:56.199 11498.589 - 11558.167: 86.5744% ( 171) 00:11:56.199 11558.167 - 11617.745: 87.9416% ( 161) 00:11:56.199 11617.745 - 11677.324: 89.0285% ( 128) 00:11:56.199 11677.324 - 11736.902: 90.2768% ( 147) 00:11:56.199 11736.902 - 11796.480: 91.5082% ( 145) 00:11:56.199 11796.480 - 11856.058: 92.6376% ( 133) 00:11:56.199 11856.058 - 11915.636: 93.5717% ( 110) 00:11:56.199 11915.636 - 11975.215: 94.3869% ( 96) 00:11:56.199 11975.215 - 12034.793: 95.0917% ( 83) 00:11:56.199 12034.793 - 12094.371: 95.6097% ( 61) 00:11:56.199 12094.371 - 12153.949: 95.9579% ( 41) 00:11:56.199 12153.949 - 12213.527: 96.2976% ( 40) 00:11:56.199 12213.527 - 12273.105: 96.6118% ( 37) 00:11:56.199 12273.105 - 12332.684: 96.8920% ( 33) 00:11:56.199 12332.684 - 12392.262: 97.1128% ( 26) 00:11:56.199 12392.262 - 12451.840: 97.3251% ( 25) 00:11:56.199 12451.840 - 12511.418: 97.6308% ( 36) 00:11:56.199 12511.418 - 12570.996: 97.8346% ( 24) 00:11:56.199 12570.996 - 12630.575: 97.9874% ( 18) 00:11:56.199 12630.575 - 12690.153: 98.0554% ( 8) 00:11:56.199 12690.153 - 12749.731: 98.1658% ( 13) 00:11:56.199 12749.731 - 12809.309: 98.2762% ( 13) 00:11:56.199 12809.309 - 12868.887: 98.3865% ( 13) 00:11:56.199 12868.887 - 12928.465: 98.5054% ( 14) 00:11:56.199 12928.465 - 12988.044: 98.6328% ( 15) 00:11:56.199 12988.044 - 13047.622: 98.7177% ( 10) 00:11:56.199 13047.622 - 13107.200: 98.7602% ( 5) 00:11:56.199 13107.200 - 13166.778: 98.7942% ( 4) 00:11:56.199 13166.778 - 13226.356: 98.8196% ( 3) 00:11:56.199 13226.356 - 13285.935: 98.8366% ( 2) 00:11:56.199 13285.935 - 13345.513: 98.8536% ( 2) 00:11:56.199 13345.513 - 13405.091: 98.8791% ( 3) 00:11:56.199 13405.091 - 13464.669: 98.9046% ( 3) 00:11:56.199 13464.669 - 13524.247: 98.9130% ( 1) 00:11:56.199 26095.244 - 26214.400: 98.9300% ( 2) 00:11:56.199 26214.400 - 26333.556: 98.9810% ( 6) 00:11:56.199 26333.556 - 26452.713: 99.0234% ( 5) 00:11:56.199 26452.713 - 26571.869: 99.0744% ( 6) 00:11:56.199 26571.869 - 26691.025: 99.1084% ( 4) 00:11:56.199 26691.025 - 26810.182: 99.1338% ( 3) 00:11:56.199 26810.182 - 26929.338: 99.1678% ( 4) 00:11:56.199 26929.338 - 27048.495: 99.2188% ( 6) 00:11:56.199 27048.495 - 27167.651: 99.2527% ( 4) 00:11:56.199 27167.651 - 27286.807: 99.2952% ( 5) 00:11:56.199 27286.807 - 27405.964: 99.3376% ( 5) 00:11:56.199 27405.964 - 27525.120: 99.3716% ( 4) 00:11:56.199 27525.120 - 27644.276: 99.4056% ( 4) 00:11:56.199 27644.276 - 27763.433: 99.4480% ( 5) 00:11:56.199 27763.433 - 27882.589: 99.4565% ( 1) 00:11:56.199 33363.782 - 33602.095: 99.5329% 
( 9) 00:11:56.199 33602.095 - 33840.407: 99.6264% ( 11) 00:11:56.199 33840.407 - 34078.720: 99.6943% ( 8) 00:11:56.199 34078.720 - 34317.033: 99.7962% ( 12) 00:11:56.199 34317.033 - 34555.345: 99.8811% ( 10) 00:11:56.199 34555.345 - 34793.658: 99.9745% ( 11) 00:11:56.199 34793.658 - 35031.971: 100.0000% ( 3) 00:11:56.199 00:11:56.199 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:56.199 ============================================================================== 00:11:56.199 Range in us Cumulative IO count 00:11:56.199 6613.178 - 6642.967: 0.0085% ( 1) 00:11:56.199 6821.702 - 6851.491: 0.0425% ( 4) 00:11:56.199 6851.491 - 6881.280: 0.0764% ( 4) 00:11:56.199 6881.280 - 6911.069: 0.1104% ( 4) 00:11:56.199 6911.069 - 6940.858: 0.1529% ( 5) 00:11:56.199 6940.858 - 6970.647: 0.2293% ( 9) 00:11:56.199 6970.647 - 7000.436: 0.3312% ( 12) 00:11:56.199 7000.436 - 7030.225: 0.3482% ( 2) 00:11:56.199 7030.225 - 7060.015: 0.3651% ( 2) 00:11:56.199 7060.015 - 7089.804: 0.3736% ( 1) 00:11:56.199 7089.804 - 7119.593: 0.3906% ( 2) 00:11:56.199 7119.593 - 7149.382: 0.3991% ( 1) 00:11:56.199 7149.382 - 7179.171: 0.4161% ( 2) 00:11:56.199 7179.171 - 7208.960: 0.4331% ( 2) 00:11:56.199 7208.960 - 7238.749: 0.4416% ( 1) 00:11:56.199 7238.749 - 7268.538: 0.4586% ( 2) 00:11:56.199 7268.538 - 7298.327: 0.4671% ( 1) 00:11:56.199 7298.327 - 7328.116: 0.4840% ( 2) 00:11:56.199 7328.116 - 7357.905: 0.4925% ( 1) 00:11:56.199 7357.905 - 7387.695: 0.5095% ( 2) 00:11:56.199 7387.695 - 7417.484: 0.5180% ( 1) 00:11:56.199 7417.484 - 7447.273: 0.5350% ( 2) 00:11:56.199 7447.273 - 7477.062: 0.5435% ( 1) 00:11:56.199 8400.524 - 8460.102: 0.5520% ( 1) 00:11:56.199 8519.680 - 8579.258: 0.5605% ( 1) 00:11:56.199 8698.415 - 8757.993: 0.5944% ( 4) 00:11:56.199 8757.993 - 8817.571: 0.6963% ( 12) 00:11:56.199 8817.571 - 8877.149: 0.9001% ( 24) 00:11:56.199 8877.149 - 8936.727: 1.1294% ( 27) 00:11:56.199 8936.727 - 8996.305: 1.3077% ( 21) 00:11:56.199 8996.305 - 9055.884: 1.5795% ( 32) 00:11:56.199 9055.884 - 9115.462: 1.7918% ( 25) 00:11:56.199 9115.462 - 9175.040: 1.9531% ( 19) 00:11:56.199 9175.040 - 9234.618: 2.1399% ( 22) 00:11:56.199 9234.618 - 9294.196: 2.4032% ( 31) 00:11:56.199 9294.196 - 9353.775: 2.6834% ( 33) 00:11:56.199 9353.775 - 9413.353: 3.1250% ( 52) 00:11:56.199 9413.353 - 9472.931: 3.6685% ( 64) 00:11:56.199 9472.931 - 9532.509: 4.1440% ( 56) 00:11:56.199 9532.509 - 9592.087: 4.6450% ( 59) 00:11:56.199 9592.087 - 9651.665: 5.1376% ( 58) 00:11:56.199 9651.665 - 9711.244: 5.6980% ( 66) 00:11:56.199 9711.244 - 9770.822: 6.6067% ( 107) 00:11:56.199 9770.822 - 9830.400: 7.9823% ( 162) 00:11:56.199 9830.400 - 9889.978: 9.6722% ( 199) 00:11:56.199 9889.978 - 9949.556: 11.6168% ( 229) 00:11:56.199 9949.556 - 10009.135: 14.4022% ( 328) 00:11:56.199 10009.135 - 10068.713: 17.8074% ( 401) 00:11:56.199 10068.713 - 10128.291: 21.5353% ( 439) 00:11:56.199 10128.291 - 10187.869: 24.8302% ( 388) 00:11:56.199 10187.869 - 10247.447: 28.6515% ( 450) 00:11:56.199 10247.447 - 10307.025: 32.1162% ( 408) 00:11:56.199 10307.025 - 10366.604: 35.4704% ( 395) 00:11:56.199 10366.604 - 10426.182: 39.3342% ( 455) 00:11:56.199 10426.182 - 10485.760: 43.3084% ( 468) 00:11:56.199 10485.760 - 10545.338: 46.9599% ( 430) 00:11:56.199 10545.338 - 10604.916: 50.7982% ( 452) 00:11:56.199 10604.916 - 10664.495: 54.0251% ( 380) 00:11:56.199 10664.495 - 10724.073: 57.0822% ( 360) 00:11:56.199 10724.073 - 10783.651: 60.1562% ( 362) 00:11:56.199 10783.651 - 10843.229: 62.7463% ( 305) 00:11:56.199 10843.229 - 10902.807: 64.9796% ( 263) 
00:11:56.199 10902.807 - 10962.385: 67.3234% ( 276) 00:11:56.199 10962.385 - 11021.964: 69.3614% ( 240) 00:11:56.199 11021.964 - 11081.542: 71.2806% ( 226) 00:11:56.199 11081.542 - 11141.120: 73.3356% ( 242) 00:11:56.199 11141.120 - 11200.698: 75.8662% ( 298) 00:11:56.199 11200.698 - 11260.276: 78.0401% ( 256) 00:11:56.199 11260.276 - 11319.855: 79.7300% ( 199) 00:11:56.199 11319.855 - 11379.433: 81.4793% ( 206) 00:11:56.199 11379.433 - 11439.011: 82.9993% ( 179) 00:11:56.199 11439.011 - 11498.589: 84.6892% ( 199) 00:11:56.199 11498.589 - 11558.167: 85.9800% ( 152) 00:11:56.199 11558.167 - 11617.745: 87.4321% ( 171) 00:11:56.199 11617.745 - 11677.324: 88.9861% ( 183) 00:11:56.199 11677.324 - 11736.902: 90.0136% ( 121) 00:11:56.199 11736.902 - 11796.480: 91.3298% ( 155) 00:11:56.199 11796.480 - 11856.058: 92.5442% ( 143) 00:11:56.199 11856.058 - 11915.636: 93.4952% ( 112) 00:11:56.199 11915.636 - 11975.215: 94.2171% ( 85) 00:11:56.199 11975.215 - 12034.793: 94.7860% ( 67) 00:11:56.199 12034.793 - 12094.371: 95.4229% ( 75) 00:11:56.199 12094.371 - 12153.949: 95.9239% ( 59) 00:11:56.199 12153.949 - 12213.527: 96.3315% ( 48) 00:11:56.199 12213.527 - 12273.105: 96.6033% ( 32) 00:11:56.199 12273.105 - 12332.684: 96.8750% ( 32) 00:11:56.199 12332.684 - 12392.262: 97.2486% ( 44) 00:11:56.199 12392.262 - 12451.840: 97.5374% ( 34) 00:11:56.199 12451.840 - 12511.418: 97.7666% ( 27) 00:11:56.199 12511.418 - 12570.996: 97.8855% ( 14) 00:11:56.199 12570.996 - 12630.575: 98.0129% ( 15) 00:11:56.199 12630.575 - 12690.153: 98.1403% ( 15) 00:11:56.199 12690.153 - 12749.731: 98.2082% ( 8) 00:11:56.199 12749.731 - 12809.309: 98.3016% ( 11) 00:11:56.200 12809.309 - 12868.887: 98.3950% ( 11) 00:11:56.200 12868.887 - 12928.465: 98.4885% ( 11) 00:11:56.200 12928.465 - 12988.044: 98.5564% ( 8) 00:11:56.200 12988.044 - 13047.622: 98.6158% ( 7) 00:11:56.200 13047.622 - 13107.200: 98.6753% ( 7) 00:11:56.200 13107.200 - 13166.778: 98.6923% ( 2) 00:11:56.200 13166.778 - 13226.356: 98.7177% ( 3) 00:11:56.200 13226.356 - 13285.935: 98.7347% ( 2) 00:11:56.200 13285.935 - 13345.513: 98.7517% ( 2) 00:11:56.200 13345.513 - 13405.091: 98.7772% ( 3) 00:11:56.200 13405.091 - 13464.669: 98.7942% ( 2) 00:11:56.200 13464.669 - 13524.247: 98.8111% ( 2) 00:11:56.200 13524.247 - 13583.825: 98.8281% ( 2) 00:11:56.200 13583.825 - 13643.404: 98.8536% ( 3) 00:11:56.200 13643.404 - 13702.982: 98.8706% ( 2) 00:11:56.200 13702.982 - 13762.560: 98.8876% ( 2) 00:11:56.200 13762.560 - 13822.138: 98.9130% ( 3) 00:11:56.200 26095.244 - 26214.400: 98.9470% ( 4) 00:11:56.200 26214.400 - 26333.556: 98.9810% ( 4) 00:11:56.200 26333.556 - 26452.713: 99.0149% ( 4) 00:11:56.200 26452.713 - 26571.869: 99.0659% ( 6) 00:11:56.200 26571.869 - 26691.025: 99.1084% ( 5) 00:11:56.200 26691.025 - 26810.182: 99.1508% ( 5) 00:11:56.200 26810.182 - 26929.338: 99.1933% ( 5) 00:11:56.200 26929.338 - 27048.495: 99.2188% ( 3) 00:11:56.200 27048.495 - 27167.651: 99.2612% ( 5) 00:11:56.200 27167.651 - 27286.807: 99.3037% ( 5) 00:11:56.200 27286.807 - 27405.964: 99.3376% ( 4) 00:11:56.200 27405.964 - 27525.120: 99.3801% ( 5) 00:11:56.200 27525.120 - 27644.276: 99.4310% ( 6) 00:11:56.200 27644.276 - 27763.433: 99.4565% ( 3) 00:11:56.200 32887.156 - 33125.469: 99.5075% ( 6) 00:11:56.200 33125.469 - 33363.782: 99.6009% ( 11) 00:11:56.200 33363.782 - 33602.095: 99.6773% ( 9) 00:11:56.200 33602.095 - 33840.407: 99.7707% ( 11) 00:11:56.200 33840.407 - 34078.720: 99.8556% ( 10) 00:11:56.200 34078.720 - 34317.033: 99.9406% ( 10) 00:11:56.200 34317.033 - 34555.345: 100.0000% ( 7) 
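The "Summary latency data" tables near the top of this run and the cumulative histograms that follow them are two views of the same measurement: each histogram row pairs a bucket's upper bound in microseconds with the cumulative percentage of I/Os completed at or below it (the parenthesized figure appears to be the per-bucket I/O count), so each summary percentile is simply the first bucket whose cumulative percentage reaches the target. Below is a minimal sketch of that relationship in plain Python (not anything from the SPDK tree), with bucket values copied from the PCIE (0000:00:12.0) NSID 1 histogram above:

```python
def percentile_from_cumulative(buckets, pct):
    """Return the upper bound (us) of the first histogram bucket whose
    cumulative completion percentage reaches pct."""
    for upper_us, cum_pct in buckets:
        if cum_pct >= pct:
            return upper_us
    raise ValueError(f"histogram never reaches {pct}%")

# (bucket upper bound in us, cumulative % of I/Os), copied from the
# PCIE (0000:00:12.0) NSID 1 histogram above.
buckets = [
    (10545.338, 46.9599),
    (10604.916, 50.7982),
    (11736.902, 90.0136),
    (12094.371, 95.4229),
    (12630.575, 98.0129),
    (26452.713, 99.0149),
]

for p in (50.0, 90.0, 95.0, 99.0):
    print(f"{p:.5f}% : {percentile_from_cumulative(buckets, p)}us")
```

Run as-is, this reproduces the 50/90/95/99% rows of the matching summary table (10604.916us, 11736.902us, 12094.371us, 26452.713us).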
00:11:56.200 00:11:56.200 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:56.200 ============================================================================== 00:11:56.200 Range in us Cumulative IO count 00:11:56.200 6523.811 - 6553.600: 0.0085% ( 1) 00:11:56.200 6553.600 - 6583.389: 0.0425% ( 4) 00:11:56.200 6583.389 - 6613.178: 0.0679% ( 3) 00:11:56.200 6613.178 - 6642.967: 0.0934% ( 3) 00:11:56.200 6642.967 - 6672.756: 0.1274% ( 4) 00:11:56.200 6672.756 - 6702.545: 0.1783% ( 6) 00:11:56.200 6702.545 - 6732.335: 0.2378% ( 7) 00:11:56.200 6732.335 - 6762.124: 0.3142% ( 9) 00:11:56.200 6762.124 - 6791.913: 0.3567% ( 5) 00:11:56.200 6791.913 - 6821.702: 0.3651% ( 1) 00:11:56.200 6821.702 - 6851.491: 0.3821% ( 2) 00:11:56.200 6851.491 - 6881.280: 0.3991% ( 2) 00:11:56.200 6881.280 - 6911.069: 0.4076% ( 1) 00:11:56.200 6911.069 - 6940.858: 0.4246% ( 2) 00:11:56.200 6940.858 - 6970.647: 0.4331% ( 1) 00:11:56.200 6970.647 - 7000.436: 0.4501% ( 2) 00:11:56.200 7000.436 - 7030.225: 0.4586% ( 1) 00:11:56.200 7030.225 - 7060.015: 0.4671% ( 1) 00:11:56.200 7060.015 - 7089.804: 0.4840% ( 2) 00:11:56.200 7089.804 - 7119.593: 0.4925% ( 1) 00:11:56.200 7119.593 - 7149.382: 0.5095% ( 2) 00:11:56.200 7149.382 - 7179.171: 0.5180% ( 1) 00:11:56.200 7179.171 - 7208.960: 0.5350% ( 2) 00:11:56.200 7208.960 - 7238.749: 0.5435% ( 1) 00:11:56.200 8281.367 - 8340.945: 0.5520% ( 1) 00:11:56.200 8460.102 - 8519.680: 0.5690% ( 2) 00:11:56.200 8519.680 - 8579.258: 0.5944% ( 3) 00:11:56.200 8579.258 - 8638.836: 0.6369% ( 5) 00:11:56.200 8638.836 - 8698.415: 0.8067% ( 20) 00:11:56.200 8698.415 - 8757.993: 0.9341% ( 15) 00:11:56.200 8757.993 - 8817.571: 0.9935% ( 7) 00:11:56.200 8817.571 - 8877.149: 1.0700% ( 9) 00:11:56.200 8877.149 - 8936.727: 1.1719% ( 12) 00:11:56.200 8936.727 - 8996.305: 1.2738% ( 12) 00:11:56.200 8996.305 - 9055.884: 1.4351% ( 19) 00:11:56.200 9055.884 - 9115.462: 1.6984% ( 31) 00:11:56.200 9115.462 - 9175.040: 1.9022% ( 24) 00:11:56.200 9175.040 - 9234.618: 2.0550% ( 18) 00:11:56.200 9234.618 - 9294.196: 2.3268% ( 32) 00:11:56.200 9294.196 - 9353.775: 2.6070% ( 33) 00:11:56.200 9353.775 - 9413.353: 2.9891% ( 45) 00:11:56.200 9413.353 - 9472.931: 3.4732% ( 57) 00:11:56.200 9472.931 - 9532.509: 4.0676% ( 70) 00:11:56.200 9532.509 - 9592.087: 4.6281% ( 66) 00:11:56.200 9592.087 - 9651.665: 5.3159% ( 81) 00:11:56.200 9651.665 - 9711.244: 5.7660% ( 53) 00:11:56.200 9711.244 - 9770.822: 6.6746% ( 107) 00:11:56.200 9770.822 - 9830.400: 7.9738% ( 153) 00:11:56.200 9830.400 - 9889.978: 9.6128% ( 193) 00:11:56.200 9889.978 - 9949.556: 11.7357% ( 250) 00:11:56.200 9949.556 - 10009.135: 14.3088% ( 303) 00:11:56.200 10009.135 - 10068.713: 17.6036% ( 388) 00:11:56.200 10068.713 - 10128.291: 20.8050% ( 377) 00:11:56.200 10128.291 - 10187.869: 24.4480% ( 429) 00:11:56.200 10187.869 - 10247.447: 28.3713% ( 462) 00:11:56.200 10247.447 - 10307.025: 32.3709% ( 471) 00:11:56.200 10307.025 - 10366.604: 35.9290% ( 419) 00:11:56.200 10366.604 - 10426.182: 39.7588% ( 451) 00:11:56.200 10426.182 - 10485.760: 43.9368% ( 492) 00:11:56.200 10485.760 - 10545.338: 47.2232% ( 387) 00:11:56.200 10545.338 - 10604.916: 50.6539% ( 404) 00:11:56.200 10604.916 - 10664.495: 53.4477% ( 329) 00:11:56.200 10664.495 - 10724.073: 56.3094% ( 337) 00:11:56.200 10724.073 - 10783.651: 59.1202% ( 331) 00:11:56.200 10783.651 - 10843.229: 62.0329% ( 343) 00:11:56.200 10843.229 - 10902.807: 64.9202% ( 340) 00:11:56.200 10902.807 - 10962.385: 67.6546% ( 322) 00:11:56.200 10962.385 - 11021.964: 70.1512% ( 294) 00:11:56.200 
11021.964 - 11081.542: 72.4779% ( 274) 00:11:56.200 11081.542 - 11141.120: 74.9321% ( 289) 00:11:56.200 11141.120 - 11200.698: 76.9871% ( 242) 00:11:56.200 11200.698 - 11260.276: 78.8638% ( 221) 00:11:56.200 11260.276 - 11319.855: 80.4688% ( 189) 00:11:56.200 11319.855 - 11379.433: 81.8020% ( 157) 00:11:56.200 11379.433 - 11439.011: 83.3645% ( 184) 00:11:56.200 11439.011 - 11498.589: 84.7571% ( 164) 00:11:56.200 11498.589 - 11558.167: 86.4555% ( 200) 00:11:56.200 11558.167 - 11617.745: 87.9331% ( 174) 00:11:56.200 11617.745 - 11677.324: 89.2323% ( 153) 00:11:56.200 11677.324 - 11736.902: 90.5571% ( 156) 00:11:56.200 11736.902 - 11796.480: 91.6865% ( 133) 00:11:56.200 11796.480 - 11856.058: 92.7395% ( 124) 00:11:56.200 11856.058 - 11915.636: 93.4698% ( 86) 00:11:56.200 11915.636 - 11975.215: 94.0048% ( 63) 00:11:56.200 11975.215 - 12034.793: 94.5652% ( 66) 00:11:56.200 12034.793 - 12094.371: 95.1172% ( 65) 00:11:56.200 12094.371 - 12153.949: 95.8050% ( 81) 00:11:56.200 12153.949 - 12213.527: 96.2891% ( 57) 00:11:56.200 12213.527 - 12273.105: 96.8071% ( 61) 00:11:56.200 12273.105 - 12332.684: 97.1298% ( 38) 00:11:56.200 12332.684 - 12392.262: 97.3760% ( 29) 00:11:56.200 12392.262 - 12451.840: 97.5713% ( 23) 00:11:56.200 12451.840 - 12511.418: 97.7497% ( 21) 00:11:56.200 12511.418 - 12570.996: 97.9365% ( 22) 00:11:56.200 12570.996 - 12630.575: 98.0978% ( 19) 00:11:56.200 12630.575 - 12690.153: 98.1912% ( 11) 00:11:56.200 12690.153 - 12749.731: 98.2677% ( 9) 00:11:56.200 12749.731 - 12809.309: 98.3016% ( 4) 00:11:56.200 12809.309 - 12868.887: 98.3101% ( 1) 00:11:56.200 12868.887 - 12928.465: 98.3781% ( 8) 00:11:56.200 12928.465 - 12988.044: 98.4460% ( 8) 00:11:56.200 12988.044 - 13047.622: 98.5139% ( 8) 00:11:56.200 13047.622 - 13107.200: 98.5649% ( 6) 00:11:56.200 13107.200 - 13166.778: 98.6158% ( 6) 00:11:56.200 13166.778 - 13226.356: 98.7007% ( 10) 00:11:56.200 13226.356 - 13285.935: 98.7517% ( 6) 00:11:56.200 13285.935 - 13345.513: 98.7687% ( 2) 00:11:56.200 13345.513 - 13405.091: 98.7857% ( 2) 00:11:56.200 13405.091 - 13464.669: 98.8111% ( 3) 00:11:56.200 13464.669 - 13524.247: 98.8281% ( 2) 00:11:56.200 13524.247 - 13583.825: 98.8536% ( 3) 00:11:56.200 13583.825 - 13643.404: 98.8706% ( 2) 00:11:56.200 13643.404 - 13702.982: 98.8876% ( 2) 00:11:56.200 13702.982 - 13762.560: 98.9130% ( 3) 00:11:56.200 25618.618 - 25737.775: 98.9470% ( 4) 00:11:56.200 25737.775 - 25856.931: 98.9980% ( 6) 00:11:56.200 25856.931 - 25976.087: 99.0574% ( 7) 00:11:56.200 25976.087 - 26095.244: 99.1423% ( 10) 00:11:56.200 26095.244 - 26214.400: 99.2018% ( 7) 00:11:56.200 26214.400 - 26333.556: 99.2442% ( 5) 00:11:56.200 26333.556 - 26452.713: 99.2782% ( 4) 00:11:56.200 26452.713 - 26571.869: 99.3122% ( 4) 00:11:56.200 26571.869 - 26691.025: 99.3376% ( 3) 00:11:56.200 26691.025 - 26810.182: 99.3801% ( 5) 00:11:56.200 26810.182 - 26929.338: 99.4141% ( 4) 00:11:56.200 26929.338 - 27048.495: 99.4395% ( 3) 00:11:56.200 27048.495 - 27167.651: 99.4565% ( 2) 00:11:56.200 30980.655 - 31218.967: 99.4735% ( 2) 00:11:56.200 31218.967 - 31457.280: 99.4905% ( 2) 00:11:56.200 31933.905 - 32172.218: 99.5075% ( 2) 00:11:56.200 32172.218 - 32410.531: 99.5839% ( 9) 00:11:56.200 32410.531 - 32648.844: 99.6688% ( 10) 00:11:56.200 32648.844 - 32887.156: 99.7622% ( 11) 00:11:56.200 32887.156 - 33125.469: 99.8556% ( 11) 00:11:56.200 33125.469 - 33363.782: 99.9406% ( 10) 00:11:56.200 33363.782 - 33602.095: 100.0000% ( 7) 00:11:56.200 00:11:56.200 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:56.200 
============================================================================== 00:11:56.200 Range in us Cumulative IO count 00:11:56.200 6285.498 - 6315.287: 0.0170% ( 2) 00:11:56.201 6315.287 - 6345.076: 0.0510% ( 4) 00:11:56.201 6345.076 - 6374.865: 0.0849% ( 4) 00:11:56.201 6374.865 - 6404.655: 0.1189% ( 4) 00:11:56.201 6404.655 - 6434.444: 0.1359% ( 2) 00:11:56.201 6434.444 - 6464.233: 0.2463% ( 13) 00:11:56.201 6464.233 - 6494.022: 0.3567% ( 13) 00:11:56.201 6494.022 - 6523.811: 0.3821% ( 3) 00:11:56.201 6523.811 - 6553.600: 0.3991% ( 2) 00:11:56.201 6553.600 - 6583.389: 0.4076% ( 1) 00:11:56.201 6583.389 - 6613.178: 0.4246% ( 2) 00:11:56.201 6613.178 - 6642.967: 0.4331% ( 1) 00:11:56.201 6642.967 - 6672.756: 0.4501% ( 2) 00:11:56.201 6672.756 - 6702.545: 0.4586% ( 1) 00:11:56.201 6702.545 - 6732.335: 0.4755% ( 2) 00:11:56.201 6732.335 - 6762.124: 0.4840% ( 1) 00:11:56.201 6762.124 - 6791.913: 0.5010% ( 2) 00:11:56.201 6791.913 - 6821.702: 0.5095% ( 1) 00:11:56.201 6821.702 - 6851.491: 0.5265% ( 2) 00:11:56.201 6851.491 - 6881.280: 0.5350% ( 1) 00:11:56.201 6881.280 - 6911.069: 0.5435% ( 1) 00:11:56.201 8281.367 - 8340.945: 0.5520% ( 1) 00:11:56.201 8340.945 - 8400.524: 0.6454% ( 11) 00:11:56.201 8400.524 - 8460.102: 0.7218% ( 9) 00:11:56.201 8460.102 - 8519.680: 0.8492% ( 15) 00:11:56.201 8519.680 - 8579.258: 0.9086% ( 7) 00:11:56.201 8579.258 - 8638.836: 0.9681% ( 7) 00:11:56.201 8638.836 - 8698.415: 0.9935% ( 3) 00:11:56.201 8698.415 - 8757.993: 1.0275% ( 4) 00:11:56.201 8757.993 - 8817.571: 1.0445% ( 2) 00:11:56.201 8817.571 - 8877.149: 1.1039% ( 7) 00:11:56.201 8877.149 - 8936.727: 1.1634% ( 7) 00:11:56.201 8936.727 - 8996.305: 1.2313% ( 8) 00:11:56.201 8996.305 - 9055.884: 1.3417% ( 13) 00:11:56.201 9055.884 - 9115.462: 1.5710% ( 27) 00:11:56.201 9115.462 - 9175.040: 1.8427% ( 32) 00:11:56.201 9175.040 - 9234.618: 2.1399% ( 35) 00:11:56.201 9234.618 - 9294.196: 2.4796% ( 40) 00:11:56.201 9294.196 - 9353.775: 2.8957% ( 49) 00:11:56.201 9353.775 - 9413.353: 3.2694% ( 44) 00:11:56.201 9413.353 - 9472.931: 3.7194% ( 53) 00:11:56.201 9472.931 - 9532.509: 4.2120% ( 58) 00:11:56.201 9532.509 - 9592.087: 4.7979% ( 69) 00:11:56.201 9592.087 - 9651.665: 5.3159% ( 61) 00:11:56.201 9651.665 - 9711.244: 6.0292% ( 84) 00:11:56.201 9711.244 - 9770.822: 6.8954% ( 102) 00:11:56.201 9770.822 - 9830.400: 8.3645% ( 173) 00:11:56.201 9830.400 - 9889.978: 10.0374% ( 197) 00:11:56.201 9889.978 - 9949.556: 12.2622% ( 262) 00:11:56.201 9949.556 - 10009.135: 14.9202% ( 313) 00:11:56.201 10009.135 - 10068.713: 18.0027% ( 363) 00:11:56.201 10068.713 - 10128.291: 21.1532% ( 371) 00:11:56.201 10128.291 - 10187.869: 24.5584% ( 401) 00:11:56.201 10187.869 - 10247.447: 28.2439% ( 434) 00:11:56.201 10247.447 - 10307.025: 31.9803% ( 440) 00:11:56.201 10307.025 - 10366.604: 35.7507% ( 444) 00:11:56.201 10366.604 - 10426.182: 39.8692% ( 485) 00:11:56.201 10426.182 - 10485.760: 43.4952% ( 427) 00:11:56.201 10485.760 - 10545.338: 46.9684% ( 409) 00:11:56.201 10545.338 - 10604.916: 50.6199% ( 430) 00:11:56.201 10604.916 - 10664.495: 53.7449% ( 368) 00:11:56.201 10664.495 - 10724.073: 56.9463% ( 377) 00:11:56.201 10724.073 - 10783.651: 59.4344% ( 293) 00:11:56.201 10783.651 - 10843.229: 61.8546% ( 285) 00:11:56.201 10843.229 - 10902.807: 64.2833% ( 286) 00:11:56.201 10902.807 - 10962.385: 67.0177% ( 322) 00:11:56.201 10962.385 - 11021.964: 69.5992% ( 304) 00:11:56.201 11021.964 - 11081.542: 72.0194% ( 285) 00:11:56.201 11081.542 - 11141.120: 74.2272% ( 260) 00:11:56.201 11141.120 - 11200.698: 76.3417% ( 249) 
00:11:56.201 11200.698 - 11260.276: 78.2863% ( 229) 00:11:56.201 11260.276 - 11319.855: 79.9253% ( 193) 00:11:56.201 11319.855 - 11379.433: 81.6746% ( 206) 00:11:56.201 11379.433 - 11439.011: 83.4069% ( 204) 00:11:56.201 11439.011 - 11498.589: 85.1987% ( 211) 00:11:56.201 11498.589 - 11558.167: 86.7188% ( 179) 00:11:56.201 11558.167 - 11617.745: 88.2218% ( 177) 00:11:56.201 11617.745 - 11677.324: 89.5041% ( 151) 00:11:56.201 11677.324 - 11736.902: 90.9307% ( 168) 00:11:56.201 11736.902 - 11796.480: 91.8478% ( 108) 00:11:56.201 11796.480 - 11856.058: 92.9008% ( 124) 00:11:56.201 11856.058 - 11915.636: 93.7160% ( 96) 00:11:56.201 11915.636 - 11975.215: 94.2255% ( 60) 00:11:56.201 11975.215 - 12034.793: 94.8794% ( 77) 00:11:56.201 12034.793 - 12094.371: 95.3125% ( 51) 00:11:56.201 12094.371 - 12153.949: 95.6776% ( 43) 00:11:56.201 12153.949 - 12213.527: 96.1447% ( 55) 00:11:56.201 12213.527 - 12273.105: 96.4504% ( 36) 00:11:56.201 12273.105 - 12332.684: 96.7731% ( 38) 00:11:56.201 12332.684 - 12392.262: 97.0363% ( 31) 00:11:56.201 12392.262 - 12451.840: 97.2656% ( 27) 00:11:56.201 12451.840 - 12511.418: 97.5374% ( 32) 00:11:56.201 12511.418 - 12570.996: 97.8685% ( 39) 00:11:56.201 12570.996 - 12630.575: 98.0469% ( 21) 00:11:56.201 12630.575 - 12690.153: 98.1573% ( 13) 00:11:56.201 12690.153 - 12749.731: 98.2762% ( 14) 00:11:56.201 12749.731 - 12809.309: 98.3781% ( 12) 00:11:56.201 12809.309 - 12868.887: 98.4715% ( 11) 00:11:56.201 12868.887 - 12928.465: 98.5649% ( 11) 00:11:56.201 12928.465 - 12988.044: 98.6328% ( 8) 00:11:56.201 12988.044 - 13047.622: 98.7092% ( 9) 00:11:56.201 13047.622 - 13107.200: 98.7347% ( 3) 00:11:56.201 13107.200 - 13166.778: 98.7602% ( 3) 00:11:56.201 13166.778 - 13226.356: 98.7772% ( 2) 00:11:56.201 13226.356 - 13285.935: 98.8026% ( 3) 00:11:56.201 13285.935 - 13345.513: 98.8196% ( 2) 00:11:56.201 13345.513 - 13405.091: 98.8451% ( 3) 00:11:56.201 13405.091 - 13464.669: 98.8621% ( 2) 00:11:56.201 13464.669 - 13524.247: 98.8791% ( 2) 00:11:56.201 13524.247 - 13583.825: 98.9046% ( 3) 00:11:56.201 13583.825 - 13643.404: 98.9130% ( 1) 00:11:56.201 24903.680 - 25022.836: 98.9470% ( 4) 00:11:56.201 25022.836 - 25141.993: 99.0404% ( 11) 00:11:56.201 25141.993 - 25261.149: 99.1423% ( 12) 00:11:56.201 25261.149 - 25380.305: 99.2188% ( 9) 00:11:56.201 25380.305 - 25499.462: 99.2612% ( 5) 00:11:56.201 25499.462 - 25618.618: 99.2952% ( 4) 00:11:56.201 25618.618 - 25737.775: 99.3376% ( 5) 00:11:56.201 25737.775 - 25856.931: 99.3546% ( 2) 00:11:56.201 25856.931 - 25976.087: 99.3971% ( 5) 00:11:56.201 25976.087 - 26095.244: 99.4226% ( 3) 00:11:56.201 26095.244 - 26214.400: 99.4565% ( 4) 00:11:56.201 30027.404 - 30146.560: 99.4650% ( 1) 00:11:56.201 30146.560 - 30265.716: 99.4990% ( 4) 00:11:56.201 30980.655 - 31218.967: 99.5584% ( 7) 00:11:56.201 31218.967 - 31457.280: 99.6433% ( 10) 00:11:56.201 31457.280 - 31695.593: 99.7283% ( 10) 00:11:56.201 31695.593 - 31933.905: 99.8217% ( 11) 00:11:56.201 31933.905 - 32172.218: 99.9151% ( 11) 00:11:56.201 32172.218 - 32410.531: 99.9915% ( 9) 00:11:56.201 32410.531 - 32648.844: 100.0000% ( 1) 00:11:56.201 00:11:56.201 15:38:44 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:56.201 00:11:56.201 real 0m2.670s 00:11:56.201 user 0m2.266s 00:11:56.201 sys 0m0.290s 00:11:56.201 15:38:44 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.201 15:38:44 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:56.201 ************************************ 00:11:56.201 END TEST nvme_perf 00:11:56.201 
************************************ 00:11:56.201 15:38:44 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:56.201 15:38:44 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.201 15:38:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.201 15:38:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.201 ************************************ 00:11:56.201 START TEST nvme_hello_world 00:11:56.201 ************************************ 00:11:56.201 15:38:44 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:56.460 Initializing NVMe Controllers 00:11:56.460 Attached to 0000:00:13.0 00:11:56.460 Namespace ID: 1 size: 1GB 00:11:56.460 Attached to 0000:00:10.0 00:11:56.460 Namespace ID: 1 size: 6GB 00:11:56.460 Attached to 0000:00:11.0 00:11:56.460 Namespace ID: 1 size: 5GB 00:11:56.460 Attached to 0000:00:12.0 00:11:56.460 Namespace ID: 1 size: 4GB 00:11:56.460 Namespace ID: 2 size: 4GB 00:11:56.460 Namespace ID: 3 size: 4GB 00:11:56.460 Initialization complete. 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.460 INFO: using host memory buffer for IO 00:11:56.460 Hello world! 00:11:56.717 00:11:56.717 real 0m0.299s 00:11:56.717 user 0m0.103s 00:11:56.717 sys 0m0.142s 00:11:56.717 ************************************ 00:11:56.717 END TEST nvme_hello_world 00:11:56.717 ************************************ 00:11:56.717 15:38:45 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.717 15:38:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:56.718 15:38:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:56.718 15:38:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.718 15:38:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.718 15:38:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.718 ************************************ 00:11:56.718 START TEST nvme_sgl 00:11:56.718 ************************************ 00:11:56.718 15:38:45 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:56.976 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:56.976 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:56.976 
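For context on the nvme_hello_world run above: the example probes the PCIe controllers, attaches to each, allocates an I/O qpair, and round-trips a "Hello world!" string through namespace 1; the "using host memory buffer for IO" lines indicate it used an ordinary host DMA allocation rather than the controller memory buffer (CMB). Below is a compressed sketch of that flow against the public SPDK NVMe API; error handling is trimmed, it assumes a sector size of at most 4 KiB, and it is not the example's verbatim source.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static volatile bool g_done;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true;    /* attach to every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
        g_ctrlr = ctrlr;        /* this sketch keeps only the last controller */
}

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        g_done = true;
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0 ||
            spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 ||
            g_ctrlr == NULL) {
                return 1;
        }

        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
        struct spdk_nvme_qpair *qp =
            spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
        /* Host DMA buffer for one block. */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);

        snprintf(buf, 0x1000, "%s", "Hello world!");
        g_done = false;
        spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1, io_done, NULL, 0);
        while (!g_done) {
                spdk_nvme_qpair_process_completions(qp, 0);
        }

        memset(buf, 0, 0x1000);
        g_done = false;
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, NULL, 0);
        while (!g_done) {
                spdk_nvme_qpair_process_completions(qp, 0);
        }
        printf("%s\n", buf);    /* prints the string read back */

        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qp);
        spdk_nvme_detach(g_ctrlr);
        return 0;
}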
0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:56.976 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:56.976 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:56.976 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:56.976 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:56.976 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:56.976 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:56.976 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:56.976 NVMe Readv/Writev Request test 00:11:56.976 Attached to 0000:00:13.0 00:11:56.976 Attached to 0000:00:10.0 00:11:56.976 Attached to 0000:00:11.0 00:11:56.976 Attached to 0000:00:12.0 00:11:56.976 0000:00:10.0: build_io_request_2 test passed 00:11:56.976 0000:00:10.0: build_io_request_4 test passed 00:11:56.976 0000:00:10.0: build_io_request_5 test passed 00:11:56.976 0000:00:10.0: build_io_request_6 test passed 00:11:56.976 0000:00:10.0: build_io_request_7 test passed 00:11:56.976 0000:00:10.0: build_io_request_10 test passed 00:11:56.976 0000:00:11.0: build_io_request_2 test passed 00:11:56.976 0000:00:11.0: build_io_request_4 test passed 00:11:56.976 0000:00:11.0: build_io_request_5 test passed 00:11:56.976 0000:00:11.0: build_io_request_6 test passed 00:11:56.976 0000:00:11.0: build_io_request_7 test passed 00:11:56.976 0000:00:11.0: build_io_request_10 test passed 00:11:56.976 Cleaning up... 
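The nvme_sgl pass/fail pattern above comes from vectored I/O: the driver walks caller-supplied scatter-gather callbacks, and requests whose segment lengths do not add up to the requested transfer size are rejected at submission, which appears to be what the "Invalid IO length parameter" lines report for the deliberately malformed cases. A sketch of the two callbacks such a request needs follows; the context struct and helper names are invented for illustration, and only the spdk_nvme_ns_cmd_writev call and the callback signatures are the public API.

#include <stdint.h>
#include "spdk/nvme.h"

/* Hypothetical two-segment payload; the real test builds many variants. */
struct sgl_ctx {
        struct { void *base; uint32_t len; } sge[2];
        int idx;
};

static void
reset_sgl(void *arg, uint32_t offset)
{
        struct sgl_ctx *c = arg;

        c->idx = 0;     /* restart iteration; offset handling elided */
}

static int
next_sge(void *arg, void **address, uint32_t *length)
{
        struct sgl_ctx *c = arg;

        *address = c->sge[c->idx].base;
        *length = c->sge[c->idx].len;
        c->idx++;
        return 0;
}

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Submits a write described by the two SGEs in ctx. If the SGE lengths do
 * not sum to lba_count * sector_size, the call fails up front. */
static int
submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                 struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count)
{
        return spdk_nvme_ns_cmd_writev(ns, qp, lba, lba_count,
                                       io_done, ctx, 0, reset_sgl, next_sge);
}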
00:11:56.976 00:11:56.976 real 0m0.358s 00:11:56.976 user 0m0.184s 00:11:56.976 sys 0m0.126s 00:11:56.976 15:38:45 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.976 ************************************ 00:11:56.976 END TEST nvme_sgl 00:11:56.976 ************************************ 00:11:56.976 15:38:45 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:56.976 15:38:45 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:56.976 15:38:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.976 15:38:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.976 15:38:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.976 ************************************ 00:11:56.976 START TEST nvme_e2edp 00:11:56.976 ************************************ 00:11:56.976 15:38:45 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:57.234 NVMe Write/Read with End-to-End data protection test 00:11:57.234 Attached to 0000:00:13.0 00:11:57.234 Attached to 0000:00:10.0 00:11:57.234 Attached to 0000:00:11.0 00:11:57.234 Attached to 0000:00:12.0 00:11:57.234 Cleaning up... 00:11:57.492 00:11:57.492 real 0m0.283s 00:11:57.492 user 0m0.107s 00:11:57.492 sys 0m0.134s 00:11:57.492 ************************************ 00:11:57.492 END TEST nvme_e2edp 00:11:57.492 ************************************ 00:11:57.492 15:38:45 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.492 15:38:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:57.492 15:38:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:57.492 15:38:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.492 15:38:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.492 15:38:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.492 ************************************ 00:11:57.492 START TEST nvme_reserve 00:11:57.492 ************************************ 00:11:57.492 15:38:45 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:57.750 ===================================================== 00:11:57.750 NVMe Controller at PCI bus 0, device 19, function 0 00:11:57.750 ===================================================== 00:11:57.750 Reservations: Not Supported 00:11:57.750 ===================================================== 00:11:57.750 NVMe Controller at PCI bus 0, device 16, function 0 00:11:57.750 ===================================================== 00:11:57.750 Reservations: Not Supported 00:11:57.750 ===================================================== 00:11:57.750 NVMe Controller at PCI bus 0, device 17, function 0 00:11:57.750 ===================================================== 00:11:57.750 Reservations: Not Supported 00:11:57.750 ===================================================== 00:11:57.750 NVMe Controller at PCI bus 0, device 18, function 0 00:11:57.750 ===================================================== 00:11:57.750 Reservations: Not Supported 00:11:57.750 Reservation test passed 00:11:57.750 ************************************ 00:11:57.750 END TEST nvme_reserve 00:11:57.750 ************************************ 00:11:57.750 00:11:57.750 real 0m0.291s 00:11:57.750 user 0m0.104s 00:11:57.750 sys 0m0.142s 00:11:57.750 15:38:46 
nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.750 15:38:46 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:57.750 15:38:46 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:57.750 15:38:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.751 15:38:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.751 15:38:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.751 ************************************ 00:11:57.751 START TEST nvme_err_injection 00:11:57.751 ************************************ 00:11:57.751 15:38:46 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:58.009 NVMe Error Injection test 00:11:58.009 Attached to 0000:00:13.0 00:11:58.009 Attached to 0000:00:10.0 00:11:58.009 Attached to 0000:00:11.0 00:11:58.009 Attached to 0000:00:12.0 00:11:58.009 0000:00:10.0: get features failed as expected 00:11:58.009 0000:00:11.0: get features failed as expected 00:11:58.009 0000:00:12.0: get features failed as expected 00:11:58.009 0000:00:13.0: get features failed as expected 00:11:58.009 0000:00:13.0: get features successfully as expected 00:11:58.009 0000:00:10.0: get features successfully as expected 00:11:58.009 0000:00:11.0: get features successfully as expected 00:11:58.009 0000:00:12.0: get features successfully as expected 00:11:58.009 0000:00:13.0: read failed as expected 00:11:58.009 0000:00:10.0: read failed as expected 00:11:58.009 0000:00:11.0: read failed as expected 00:11:58.009 0000:00:12.0: read failed as expected 00:11:58.009 0000:00:13.0: read successfully as expected 00:11:58.009 0000:00:10.0: read successfully as expected 00:11:58.009 0000:00:11.0: read successfully as expected 00:11:58.009 0000:00:12.0: read successfully as expected 00:11:58.009 Cleaning up... 00:11:58.009 00:11:58.009 real 0m0.295s 00:11:58.009 user 0m0.129s 00:11:58.009 sys 0m0.121s 00:11:58.009 ************************************ 00:11:58.009 END TEST nvme_err_injection 00:11:58.009 ************************************ 00:11:58.009 15:38:46 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.009 15:38:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 15:38:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:58.009 15:38:46 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:11:58.009 15:38:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.009 15:38:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 ************************************ 00:11:58.009 START TEST nvme_overhead 00:11:58.009 ************************************ 00:11:58.009 15:38:46 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:59.386 Initializing NVMe Controllers 00:11:59.386 Attached to 0000:00:13.0 00:11:59.386 Attached to 0000:00:10.0 00:11:59.386 Attached to 0000:00:11.0 00:11:59.386 Attached to 0000:00:12.0 00:11:59.386 Initialization complete. Launching workers. 
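The submit/complete histograms that follow bucket per-command software overhead: roughly, the time spent inside the submission call and inside completion processing for each 4 KiB I/O (the -o 4096 argument). Below is a generic sketch of the measurement idea only, not the overhead tool's source; an SPDK application would more likely read the TSC via spdk_get_ticks() than call clock_gettime(), but the latter keeps the sketch portable.

#include <stdint.h>
#include <time.h>
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static uint64_t
now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* One sample for the "submit (in ns)" histogram: the cost of the submit
 * call itself, not the device latency. */
static uint64_t
timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp, void *buf)
{
        uint64_t t0 = now_ns();

        spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* LBA */, 1, io_done, NULL, 0);
        return now_ns() - t0;
}

/* One sample for the "complete (in ns)" histogram: the cost of reaping
 * completions on the qpair. */
static uint64_t
timed_complete(struct spdk_nvme_qpair *qp)
{
        uint64_t t0 = now_ns();

        spdk_nvme_qpair_process_completions(qp, 0);
        return now_ns() - t0;
}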
00:11:59.386 submit (in ns) avg, min, max = 16564.5, 12979.1, 106832.7 00:11:59.386 complete (in ns) avg, min, max = 11172.6, 8426.4, 84741.8 00:11:59.386 00:11:59.386 Submit histogram 00:11:59.386 ================ 00:11:59.386 Range in us Cumulative Count 00:11:59.386 12.975 - 13.033: 0.0122% ( 1) 00:11:59.386 13.149 - 13.207: 0.0244% ( 1) 00:11:59.386 13.207 - 13.265: 0.0731% ( 4) 00:11:59.386 13.265 - 13.324: 0.1096% ( 3) 00:11:59.386 13.324 - 13.382: 0.1826% ( 6) 00:11:59.386 13.382 - 13.440: 0.2800% ( 8) 00:11:59.386 13.440 - 13.498: 0.4018% ( 10) 00:11:59.386 13.498 - 13.556: 0.5844% ( 15) 00:11:59.386 13.556 - 13.615: 0.7184% ( 11) 00:11:59.386 13.615 - 13.673: 0.8888% ( 14) 00:11:59.386 13.673 - 13.731: 1.2176% ( 27) 00:11:59.386 13.731 - 13.789: 1.4976% ( 23) 00:11:59.386 13.789 - 13.847: 1.8020% ( 25) 00:11:59.386 13.847 - 13.905: 2.1429% ( 28) 00:11:59.386 13.905 - 13.964: 2.6300% ( 40) 00:11:59.386 13.964 - 14.022: 3.3727% ( 61) 00:11:59.386 14.022 - 14.080: 4.5659% ( 98) 00:11:59.386 14.080 - 14.138: 6.6480% ( 171) 00:11:59.386 14.138 - 14.196: 9.2536% ( 214) 00:11:59.386 14.196 - 14.255: 12.8333% ( 294) 00:11:59.386 14.255 - 14.313: 17.2044% ( 359) 00:11:59.386 14.313 - 14.371: 22.0626% ( 399) 00:11:59.386 14.371 - 14.429: 27.8704% ( 477) 00:11:59.386 14.429 - 14.487: 32.9478% ( 417) 00:11:59.386 14.487 - 14.545: 37.2824% ( 356) 00:11:59.386 14.545 - 14.604: 41.6535% ( 359) 00:11:59.386 14.604 - 14.662: 45.3184% ( 301) 00:11:59.386 14.662 - 14.720: 47.9484% ( 216) 00:11:59.386 14.720 - 14.778: 50.3592% ( 198) 00:11:59.386 14.778 - 14.836: 51.9420% ( 130) 00:11:59.386 14.836 - 14.895: 53.3910% ( 119) 00:11:59.386 14.895 - 15.011: 56.0088% ( 215) 00:11:59.386 15.011 - 15.127: 58.2369% ( 183) 00:11:59.386 15.127 - 15.244: 60.7208% ( 204) 00:11:59.386 15.244 - 15.360: 63.0464% ( 191) 00:11:59.386 15.360 - 15.476: 64.5562% ( 124) 00:11:59.386 15.476 - 15.593: 65.6277% ( 88) 00:11:59.386 15.593 - 15.709: 66.2608% ( 52) 00:11:59.386 15.709 - 15.825: 66.6748% ( 34) 00:11:59.386 15.825 - 15.942: 67.0279% ( 29) 00:11:59.386 15.942 - 16.058: 67.3079% ( 23) 00:11:59.386 16.058 - 16.175: 67.5514% ( 20) 00:11:59.386 16.175 - 16.291: 67.6854% ( 11) 00:11:59.386 16.291 - 16.407: 67.8680% ( 15) 00:11:59.386 16.407 - 16.524: 67.9411% ( 6) 00:11:59.386 16.524 - 16.640: 68.0994% ( 13) 00:11:59.386 16.640 - 16.756: 68.2333% ( 11) 00:11:59.386 16.756 - 16.873: 68.3307% ( 8) 00:11:59.386 16.873 - 16.989: 68.4159% ( 7) 00:11:59.386 16.989 - 17.105: 68.6351% ( 18) 00:11:59.386 17.105 - 17.222: 69.4509% ( 67) 00:11:59.386 17.222 - 17.338: 71.4112% ( 161) 00:11:59.386 17.338 - 17.455: 74.6743% ( 268) 00:11:59.386 17.455 - 17.571: 77.3895% ( 223) 00:11:59.386 17.571 - 17.687: 79.7760% ( 196) 00:11:59.386 17.687 - 17.804: 81.3588% ( 130) 00:11:59.386 17.804 - 17.920: 82.3085% ( 78) 00:11:59.386 17.920 - 18.036: 83.2461% ( 77) 00:11:59.386 18.036 - 18.153: 83.9888% ( 61) 00:11:59.386 18.153 - 18.269: 84.4637% ( 39) 00:11:59.386 18.269 - 18.385: 85.3525% ( 73) 00:11:59.386 18.385 - 18.502: 86.0587% ( 58) 00:11:59.386 18.502 - 18.618: 86.7162% ( 54) 00:11:59.386 18.618 - 18.735: 87.2884% ( 47) 00:11:59.386 18.735 - 18.851: 87.6537% ( 30) 00:11:59.386 18.851 - 18.967: 88.0190% ( 30) 00:11:59.386 18.967 - 19.084: 88.3964% ( 31) 00:11:59.386 19.084 - 19.200: 88.6400% ( 20) 00:11:59.386 19.200 - 19.316: 88.8713% ( 19) 00:11:59.386 19.316 - 19.433: 88.9444% ( 6) 00:11:59.386 19.433 - 19.549: 88.9931% ( 4) 00:11:59.386 19.549 - 19.665: 89.2244% ( 19) 00:11:59.386 19.665 - 19.782: 89.3949% ( 14) 00:11:59.386 
19.782 - 19.898: 89.4923% ( 8) 00:11:59.386 19.898 - 20.015: 89.5775% ( 7) 00:11:59.386 20.015 - 20.131: 89.6871% ( 9) 00:11:59.386 20.131 - 20.247: 89.8210% ( 11) 00:11:59.386 20.247 - 20.364: 89.9549% ( 11) 00:11:59.386 20.364 - 20.480: 90.1254% ( 14) 00:11:59.386 20.480 - 20.596: 90.3080% ( 15) 00:11:59.386 20.596 - 20.713: 90.5394% ( 19) 00:11:59.386 20.713 - 20.829: 90.7220% ( 15) 00:11:59.386 20.829 - 20.945: 90.9777% ( 21) 00:11:59.386 20.945 - 21.062: 91.1604% ( 15) 00:11:59.386 21.062 - 21.178: 91.3430% ( 15) 00:11:59.386 21.178 - 21.295: 91.6352% ( 24) 00:11:59.387 21.295 - 21.411: 91.8909% ( 21) 00:11:59.387 21.411 - 21.527: 92.0614% ( 14) 00:11:59.387 21.527 - 21.644: 92.1953% ( 11) 00:11:59.387 21.644 - 21.760: 92.3414% ( 12) 00:11:59.387 21.760 - 21.876: 92.4145% ( 6) 00:11:59.387 21.876 - 21.993: 92.4632% ( 4) 00:11:59.387 21.993 - 22.109: 92.5606% ( 8) 00:11:59.387 22.109 - 22.225: 92.6823% ( 10) 00:11:59.387 22.225 - 22.342: 92.7554% ( 6) 00:11:59.387 22.342 - 22.458: 92.8893% ( 11) 00:11:59.387 22.458 - 22.575: 93.0111% ( 10) 00:11:59.387 22.575 - 22.691: 93.1450% ( 11) 00:11:59.387 22.691 - 22.807: 93.2302% ( 7) 00:11:59.387 22.807 - 22.924: 93.3277% ( 8) 00:11:59.387 22.924 - 23.040: 93.4494% ( 10) 00:11:59.387 23.040 - 23.156: 93.5833% ( 11) 00:11:59.387 23.156 - 23.273: 93.7173% ( 11) 00:11:59.387 23.273 - 23.389: 93.7782% ( 5) 00:11:59.387 23.389 - 23.505: 93.8269% ( 4) 00:11:59.387 23.505 - 23.622: 93.8877% ( 5) 00:11:59.387 23.622 - 23.738: 93.9364% ( 4) 00:11:59.387 23.738 - 23.855: 94.0095% ( 6) 00:11:59.387 23.855 - 23.971: 94.0582% ( 4) 00:11:59.387 23.971 - 24.087: 94.0947% ( 3) 00:11:59.387 24.087 - 24.204: 94.1434% ( 4) 00:11:59.387 24.204 - 24.320: 94.2043% ( 5) 00:11:59.387 24.320 - 24.436: 94.2652% ( 5) 00:11:59.387 24.436 - 24.553: 94.3504% ( 7) 00:11:59.387 24.553 - 24.669: 94.4113% ( 5) 00:11:59.387 24.669 - 24.785: 94.4965% ( 7) 00:11:59.387 24.785 - 24.902: 94.6061% ( 9) 00:11:59.387 24.902 - 25.018: 94.7157% ( 9) 00:11:59.387 25.018 - 25.135: 94.7522% ( 3) 00:11:59.387 25.135 - 25.251: 94.8375% ( 7) 00:11:59.387 25.251 - 25.367: 94.8862% ( 4) 00:11:59.387 25.367 - 25.484: 95.0201% ( 11) 00:11:59.387 25.484 - 25.600: 95.1540% ( 11) 00:11:59.387 25.600 - 25.716: 95.2027% ( 4) 00:11:59.387 25.716 - 25.833: 95.2271% ( 2) 00:11:59.387 25.833 - 25.949: 95.3001% ( 6) 00:11:59.387 26.065 - 26.182: 95.3367% ( 3) 00:11:59.387 26.182 - 26.298: 95.3732% ( 3) 00:11:59.387 26.298 - 26.415: 95.4219% ( 4) 00:11:59.387 26.415 - 26.531: 95.4341% ( 1) 00:11:59.387 26.531 - 26.647: 95.4706% ( 3) 00:11:59.387 26.647 - 26.764: 95.4828% ( 1) 00:11:59.387 26.764 - 26.880: 95.5193% ( 3) 00:11:59.387 26.880 - 26.996: 95.5680% ( 4) 00:11:59.387 26.996 - 27.113: 95.6045% ( 3) 00:11:59.387 27.113 - 27.229: 95.6654% ( 5) 00:11:59.387 27.229 - 27.345: 95.7019% ( 3) 00:11:59.387 27.345 - 27.462: 95.7385% ( 3) 00:11:59.387 27.462 - 27.578: 95.7628% ( 2) 00:11:59.387 27.578 - 27.695: 95.7750% ( 1) 00:11:59.387 27.695 - 27.811: 95.8237% ( 4) 00:11:59.387 27.811 - 27.927: 95.8846% ( 5) 00:11:59.387 27.927 - 28.044: 95.9455% ( 5) 00:11:59.387 28.044 - 28.160: 95.9942% ( 4) 00:11:59.387 28.160 - 28.276: 96.0550% ( 5) 00:11:59.387 28.276 - 28.393: 96.1281% ( 6) 00:11:59.387 28.393 - 28.509: 96.2620% ( 11) 00:11:59.387 28.509 - 28.625: 96.3473% ( 7) 00:11:59.387 28.625 - 28.742: 96.4812% ( 11) 00:11:59.387 28.742 - 28.858: 96.6151% ( 11) 00:11:59.387 28.858 - 28.975: 96.7247% ( 9) 00:11:59.387 28.975 - 29.091: 96.7978% ( 6) 00:11:59.387 29.091 - 29.207: 96.9926% ( 16) 00:11:59.387 29.207 
- 29.324: 97.1265% ( 11) 00:11:59.387 29.324 - 29.440: 97.2117% ( 7) 00:11:59.387 29.440 - 29.556: 97.3091% ( 8) 00:11:59.387 29.556 - 29.673: 97.4066% ( 8) 00:11:59.387 29.673 - 29.789: 97.5161% ( 9) 00:11:59.387 29.789 - 30.022: 97.8571% ( 28) 00:11:59.387 30.022 - 30.255: 98.0153% ( 13) 00:11:59.387 30.255 - 30.487: 98.1858% ( 14) 00:11:59.387 30.487 - 30.720: 98.3076% ( 10) 00:11:59.387 30.720 - 30.953: 98.4658% ( 13) 00:11:59.387 30.953 - 31.185: 98.5389% ( 6) 00:11:59.387 31.185 - 31.418: 98.5998% ( 5) 00:11:59.387 31.418 - 31.651: 98.6241% ( 2) 00:11:59.387 31.651 - 31.884: 98.7215% ( 8) 00:11:59.387 31.884 - 32.116: 98.7702% ( 4) 00:11:59.387 32.116 - 32.349: 98.8189% ( 4) 00:11:59.387 32.349 - 32.582: 98.8555% ( 3) 00:11:59.387 32.582 - 32.815: 98.8798% ( 2) 00:11:59.387 32.815 - 33.047: 98.8920% ( 1) 00:11:59.387 33.047 - 33.280: 98.9285% ( 3) 00:11:59.387 33.280 - 33.513: 98.9407% ( 1) 00:11:59.387 33.513 - 33.745: 98.9651% ( 2) 00:11:59.387 33.745 - 33.978: 98.9894% ( 2) 00:11:59.387 33.978 - 34.211: 99.0138% ( 2) 00:11:59.387 34.211 - 34.444: 99.0381% ( 2) 00:11:59.387 34.444 - 34.676: 99.0503% ( 1) 00:11:59.387 34.676 - 34.909: 99.0625% ( 1) 00:11:59.387 34.909 - 35.142: 99.0746% ( 1) 00:11:59.387 35.142 - 35.375: 99.1720% ( 8) 00:11:59.387 35.375 - 35.607: 99.1842% ( 1) 00:11:59.387 35.607 - 35.840: 99.1964% ( 1) 00:11:59.387 35.840 - 36.073: 99.2086% ( 1) 00:11:59.387 36.073 - 36.305: 99.2451% ( 3) 00:11:59.387 36.305 - 36.538: 99.2573% ( 1) 00:11:59.387 36.538 - 36.771: 99.2938% ( 3) 00:11:59.387 36.771 - 37.004: 99.3425% ( 4) 00:11:59.387 37.004 - 37.236: 99.3669% ( 2) 00:11:59.387 37.236 - 37.469: 99.3790% ( 1) 00:11:59.387 37.469 - 37.702: 99.4156% ( 3) 00:11:59.387 37.702 - 37.935: 99.4277% ( 1) 00:11:59.387 37.935 - 38.167: 99.4643% ( 3) 00:11:59.387 38.167 - 38.400: 99.5008% ( 3) 00:11:59.387 38.400 - 38.633: 99.5373% ( 3) 00:11:59.387 38.633 - 38.865: 99.5495% ( 1) 00:11:59.387 39.796 - 40.029: 99.5617% ( 1) 00:11:59.387 40.262 - 40.495: 99.5982% ( 3) 00:11:59.387 40.727 - 40.960: 99.6104% ( 1) 00:11:59.387 40.960 - 41.193: 99.6225% ( 1) 00:11:59.387 41.425 - 41.658: 99.6469% ( 2) 00:11:59.387 42.124 - 42.356: 99.6591% ( 1) 00:11:59.387 43.287 - 43.520: 99.6834% ( 2) 00:11:59.387 43.520 - 43.753: 99.6956% ( 1) 00:11:59.387 43.985 - 44.218: 99.7200% ( 2) 00:11:59.387 44.451 - 44.684: 99.7808% ( 5) 00:11:59.387 44.684 - 44.916: 99.8052% ( 2) 00:11:59.387 44.916 - 45.149: 99.8174% ( 1) 00:11:59.387 45.149 - 45.382: 99.8295% ( 1) 00:11:59.387 45.382 - 45.615: 99.8539% ( 2) 00:11:59.387 45.615 - 45.847: 99.8782% ( 2) 00:11:59.387 45.847 - 46.080: 99.8904% ( 1) 00:11:59.387 46.080 - 46.313: 99.9026% ( 1) 00:11:59.387 47.709 - 47.942: 99.9148% ( 1) 00:11:59.387 48.175 - 48.407: 99.9269% ( 1) 00:11:59.387 51.433 - 51.665: 99.9391% ( 1) 00:11:59.387 53.760 - 53.993: 99.9513% ( 1) 00:11:59.387 56.553 - 56.785: 99.9635% ( 1) 00:11:59.387 58.880 - 59.113: 99.9756% ( 1) 00:11:59.387 78.196 - 78.662: 99.9878% ( 1) 00:11:59.387 106.589 - 107.055: 100.0000% ( 1) 00:11:59.387 00:11:59.387 Complete histogram 00:11:59.387 ================== 00:11:59.387 Range in us Cumulative Count 00:11:59.387 8.378 - 8.436: 0.0122% ( 1) 00:11:59.387 8.495 - 8.553: 0.0365% ( 2) 00:11:59.387 8.553 - 8.611: 0.0731% ( 3) 00:11:59.387 8.669 - 8.727: 0.1218% ( 4) 00:11:59.387 8.727 - 8.785: 0.2557% ( 11) 00:11:59.387 8.785 - 8.844: 0.3653% ( 9) 00:11:59.387 8.844 - 8.902: 0.4749% ( 9) 00:11:59.387 8.902 - 8.960: 0.6453% ( 14) 00:11:59.387 8.960 - 9.018: 0.8158% ( 14) 00:11:59.387 9.018 - 9.076: 1.0106% ( 
16) 00:11:59.387 9.076 - 9.135: 1.2785% ( 22) 00:11:59.387 9.135 - 9.193: 1.8142% ( 44) 00:11:59.387 9.193 - 9.251: 2.9831% ( 96) 00:11:59.387 9.251 - 9.309: 5.6861% ( 222) 00:11:59.387 9.309 - 9.367: 10.6417% ( 407) 00:11:59.387 9.367 - 9.425: 18.3733% ( 635) 00:11:59.387 9.425 - 9.484: 27.8096% ( 775) 00:11:59.387 9.484 - 9.542: 37.0267% ( 757) 00:11:59.387 9.542 - 9.600: 44.5391% ( 617) 00:11:59.387 9.600 - 9.658: 49.1051% ( 375) 00:11:59.387 9.658 - 9.716: 52.0516% ( 242) 00:11:59.387 9.716 - 9.775: 53.7441% ( 139) 00:11:59.387 9.775 - 9.833: 55.3634% ( 133) 00:11:59.387 9.833 - 9.891: 56.4836% ( 92) 00:11:59.387 9.891 - 9.949: 57.2994% ( 67) 00:11:59.387 9.949 - 10.007: 58.0178% ( 59) 00:11:59.387 10.007 - 10.065: 58.3952% ( 31) 00:11:59.387 10.065 - 10.124: 58.8214% ( 35) 00:11:59.387 10.124 - 10.182: 59.1745% ( 29) 00:11:59.387 10.182 - 10.240: 59.6006% ( 35) 00:11:59.387 10.240 - 10.298: 59.7711% ( 14) 00:11:59.387 10.298 - 10.356: 60.0633% ( 24) 00:11:59.387 10.356 - 10.415: 60.3190% ( 21) 00:11:59.387 10.415 - 10.473: 60.8060% ( 40) 00:11:59.387 10.473 - 10.531: 61.4757% ( 55) 00:11:59.387 10.531 - 10.589: 62.1089% ( 52) 00:11:59.387 10.589 - 10.647: 63.0951% ( 81) 00:11:59.387 10.647 - 10.705: 63.8987% ( 66) 00:11:59.387 10.705 - 10.764: 64.7267% ( 68) 00:11:59.387 10.764 - 10.822: 65.4207% ( 57) 00:11:59.387 10.822 - 10.880: 65.8834% ( 38) 00:11:59.387 10.880 - 10.938: 66.3217% ( 36) 00:11:59.387 10.938 - 10.996: 66.5652% ( 20) 00:11:59.387 10.996 - 11.055: 66.6991% ( 11) 00:11:59.387 11.055 - 11.113: 66.7965% ( 8) 00:11:59.387 11.113 - 11.171: 66.9792% ( 15) 00:11:59.387 11.171 - 11.229: 67.1131% ( 11) 00:11:59.388 11.229 - 11.287: 67.2227% ( 9) 00:11:59.388 11.287 - 11.345: 67.4419% ( 18) 00:11:59.388 11.345 - 11.404: 67.6488% ( 17) 00:11:59.388 11.404 - 11.462: 67.8802% ( 19) 00:11:59.388 11.462 - 11.520: 68.1602% ( 23) 00:11:59.388 11.520 - 11.578: 68.7325% ( 47) 00:11:59.388 11.578 - 11.636: 70.1814% ( 119) 00:11:59.388 11.636 - 11.695: 72.2513% ( 170) 00:11:59.388 11.695 - 11.753: 75.3440% ( 254) 00:11:59.388 11.753 - 11.811: 78.5340% ( 262) 00:11:59.388 11.811 - 11.869: 81.2614% ( 224) 00:11:59.388 11.869 - 11.927: 83.1243% ( 153) 00:11:59.388 11.927 - 11.985: 84.4028% ( 105) 00:11:59.388 11.985 - 12.044: 85.3647% ( 79) 00:11:59.388 12.044 - 12.102: 85.8152% ( 37) 00:11:59.388 12.102 - 12.160: 86.0465% ( 19) 00:11:59.388 12.160 - 12.218: 86.3144% ( 22) 00:11:59.388 12.218 - 12.276: 86.5214% ( 17) 00:11:59.388 12.276 - 12.335: 86.8014% ( 23) 00:11:59.388 12.335 - 12.393: 86.9110% ( 9) 00:11:59.388 12.393 - 12.451: 87.2397% ( 27) 00:11:59.388 12.451 - 12.509: 87.4833% ( 20) 00:11:59.388 12.509 - 12.567: 87.6781% ( 16) 00:11:59.388 12.567 - 12.625: 87.8851% ( 17) 00:11:59.388 12.625 - 12.684: 87.9703% ( 7) 00:11:59.388 12.684 - 12.742: 88.0433% ( 6) 00:11:59.388 12.742 - 12.800: 88.1529% ( 9) 00:11:59.388 12.800 - 12.858: 88.3112% ( 13) 00:11:59.388 12.858 - 12.916: 88.3964% ( 7) 00:11:59.388 12.916 - 12.975: 88.5182% ( 10) 00:11:59.388 12.975 - 13.033: 88.7252% ( 17) 00:11:59.388 13.033 - 13.091: 89.0174% ( 24) 00:11:59.388 13.091 - 13.149: 89.2975% ( 23) 00:11:59.388 13.149 - 13.207: 89.4801% ( 15) 00:11:59.388 13.207 - 13.265: 89.8454% ( 30) 00:11:59.388 13.265 - 13.324: 90.1376% ( 24) 00:11:59.388 13.324 - 13.382: 90.3933% ( 21) 00:11:59.388 13.382 - 13.440: 90.5637% ( 14) 00:11:59.388 13.440 - 13.498: 90.6611% ( 8) 00:11:59.388 13.498 - 13.556: 90.7342% ( 6) 00:11:59.388 13.556 - 13.615: 90.8316% ( 8) 00:11:59.388 13.615 - 13.673: 90.9412% ( 9) 00:11:59.388 13.673 - 
13.731: 91.0386% ( 8) 00:11:59.388 13.731 - 13.789: 91.0995% ( 5) 00:11:59.388 13.789 - 13.847: 91.2456% ( 12) 00:11:59.388 13.847 - 13.905: 91.3430% ( 8) 00:11:59.388 13.905 - 13.964: 91.4404% ( 8) 00:11:59.388 13.964 - 14.022: 91.5743% ( 11) 00:11:59.388 14.022 - 14.080: 91.6839% ( 9) 00:11:59.388 14.080 - 14.138: 91.7691% ( 7) 00:11:59.388 14.138 - 14.196: 91.9518% ( 15) 00:11:59.388 14.196 - 14.255: 92.0248% ( 6) 00:11:59.388 14.255 - 14.313: 92.1588% ( 11) 00:11:59.388 14.313 - 14.371: 92.2562% ( 8) 00:11:59.388 14.371 - 14.429: 92.3414% ( 7) 00:11:59.388 14.429 - 14.487: 92.4753% ( 11) 00:11:59.388 14.487 - 14.545: 92.6215% ( 12) 00:11:59.388 14.545 - 14.604: 92.6823% ( 5) 00:11:59.388 14.604 - 14.662: 92.7554% ( 6) 00:11:59.388 14.662 - 14.720: 92.8041% ( 4) 00:11:59.388 14.720 - 14.778: 92.8771% ( 6) 00:11:59.388 14.778 - 14.836: 92.9380% ( 5) 00:11:59.388 14.836 - 14.895: 93.0354% ( 8) 00:11:59.388 14.895 - 15.011: 93.0963% ( 5) 00:11:59.388 15.011 - 15.127: 93.2181% ( 10) 00:11:59.388 15.127 - 15.244: 93.3398% ( 10) 00:11:59.388 15.244 - 15.360: 93.4372% ( 8) 00:11:59.388 15.360 - 15.476: 93.5103% ( 6) 00:11:59.388 15.476 - 15.593: 93.5468% ( 3) 00:11:59.388 15.593 - 15.709: 93.6077% ( 5) 00:11:59.388 15.709 - 15.825: 93.7051% ( 8) 00:11:59.388 15.825 - 15.942: 93.7660% ( 5) 00:11:59.388 15.942 - 16.058: 93.8390% ( 6) 00:11:59.388 16.058 - 16.175: 93.8877% ( 4) 00:11:59.388 16.175 - 16.291: 93.9121% ( 2) 00:11:59.388 16.291 - 16.407: 93.9730% ( 5) 00:11:59.388 16.407 - 16.524: 94.0704% ( 8) 00:11:59.388 16.524 - 16.640: 94.1678% ( 8) 00:11:59.388 16.640 - 16.756: 94.2043% ( 3) 00:11:59.388 16.756 - 16.873: 94.2895% ( 7) 00:11:59.388 16.873 - 16.989: 94.3504% ( 5) 00:11:59.388 16.989 - 17.105: 94.4113% ( 5) 00:11:59.388 17.105 - 17.222: 94.4722% ( 5) 00:11:59.388 17.222 - 17.338: 94.5209% ( 4) 00:11:59.388 17.338 - 17.455: 94.5696% ( 4) 00:11:59.388 17.455 - 17.571: 94.6305% ( 5) 00:11:59.388 17.571 - 17.687: 94.8009% ( 14) 00:11:59.388 17.687 - 17.804: 94.8983% ( 8) 00:11:59.388 17.804 - 17.920: 95.0079% ( 9) 00:11:59.388 17.920 - 18.036: 95.1175% ( 9) 00:11:59.388 18.036 - 18.153: 95.1906% ( 6) 00:11:59.388 18.153 - 18.269: 95.3123% ( 10) 00:11:59.388 18.269 - 18.385: 95.3367% ( 2) 00:11:59.388 18.385 - 18.502: 95.3975% ( 5) 00:11:59.388 18.502 - 18.618: 95.4706% ( 6) 00:11:59.388 18.618 - 18.735: 95.5193% ( 4) 00:11:59.388 18.735 - 18.851: 95.6045% ( 7) 00:11:59.388 18.851 - 18.967: 95.6898% ( 7) 00:11:59.388 18.967 - 19.084: 95.7872% ( 8) 00:11:59.388 19.084 - 19.200: 95.8359% ( 4) 00:11:59.388 19.200 - 19.316: 95.9333% ( 8) 00:11:59.388 19.316 - 19.433: 95.9576% ( 2) 00:11:59.388 19.433 - 19.549: 95.9820% ( 2) 00:11:59.388 19.549 - 19.665: 96.0063% ( 2) 00:11:59.388 19.665 - 19.782: 96.0550% ( 4) 00:11:59.388 19.782 - 19.898: 96.0916% ( 3) 00:11:59.388 20.015 - 20.131: 96.1524% ( 5) 00:11:59.388 20.131 - 20.247: 96.1646% ( 1) 00:11:59.388 20.247 - 20.364: 96.1890% ( 2) 00:11:59.388 20.364 - 20.480: 96.2011% ( 1) 00:11:59.388 20.480 - 20.596: 96.2133% ( 1) 00:11:59.388 20.596 - 20.713: 96.2255% ( 1) 00:11:59.388 20.713 - 20.829: 96.2377% ( 1) 00:11:59.388 20.829 - 20.945: 96.2742% ( 3) 00:11:59.388 20.945 - 21.062: 96.2864% ( 1) 00:11:59.388 21.062 - 21.178: 96.3107% ( 2) 00:11:59.388 21.178 - 21.295: 96.3351% ( 2) 00:11:59.388 21.295 - 21.411: 96.3838% ( 4) 00:11:59.388 21.411 - 21.527: 96.4081% ( 2) 00:11:59.388 21.527 - 21.644: 96.4203% ( 1) 00:11:59.388 21.644 - 21.760: 96.4812% ( 5) 00:11:59.388 21.760 - 21.876: 96.5055% ( 2) 00:11:59.388 21.876 - 21.993: 96.5299% ( 2) 
00:11:59.388 21.993 - 22.109: 96.5542% ( 2) 00:11:59.388 22.109 - 22.225: 96.5786% ( 2) 00:11:59.388 22.225 - 22.342: 96.5908% ( 1) 00:11:59.388 22.342 - 22.458: 96.6029% ( 1) 00:11:59.388 22.458 - 22.575: 96.6273% ( 2) 00:11:59.388 22.575 - 22.691: 96.6395% ( 1) 00:11:59.388 22.691 - 22.807: 96.6516% ( 1) 00:11:59.388 22.807 - 22.924: 96.6882% ( 3) 00:11:59.388 22.924 - 23.040: 96.7247% ( 3) 00:11:59.388 23.040 - 23.156: 96.7491% ( 2) 00:11:59.388 23.156 - 23.273: 96.8099% ( 5) 00:11:59.388 23.273 - 23.389: 96.8221% ( 1) 00:11:59.388 23.389 - 23.505: 96.8465% ( 2) 00:11:59.388 23.505 - 23.622: 96.9195% ( 6) 00:11:59.388 23.622 - 23.738: 96.9682% ( 4) 00:11:59.388 23.738 - 23.855: 97.1022% ( 11) 00:11:59.388 23.855 - 23.971: 97.4066% ( 25) 00:11:59.388 23.971 - 24.087: 97.5892% ( 15) 00:11:59.388 24.087 - 24.204: 97.8205% ( 19) 00:11:59.388 24.204 - 24.320: 97.9910% ( 14) 00:11:59.388 24.320 - 24.436: 98.1615% ( 14) 00:11:59.388 24.436 - 24.553: 98.2223% ( 5) 00:11:59.388 24.553 - 24.669: 98.3076% ( 7) 00:11:59.388 24.669 - 24.785: 98.3563% ( 4) 00:11:59.388 24.785 - 24.902: 98.4293% ( 6) 00:11:59.388 24.902 - 25.018: 98.4780% ( 4) 00:11:59.388 25.018 - 25.135: 98.6607% ( 15) 00:11:59.388 25.135 - 25.251: 98.7215% ( 5) 00:11:59.388 25.251 - 25.367: 98.7824% ( 5) 00:11:59.388 25.367 - 25.484: 98.8311% ( 4) 00:11:59.388 25.484 - 25.600: 98.9285% ( 8) 00:11:59.388 25.600 - 25.716: 99.0016% ( 6) 00:11:59.388 25.716 - 25.833: 99.0381% ( 3) 00:11:59.388 25.833 - 25.949: 99.0746% ( 3) 00:11:59.388 25.949 - 26.065: 99.1477% ( 6) 00:11:59.388 26.065 - 26.182: 99.1720% ( 2) 00:11:59.388 26.182 - 26.298: 99.2086% ( 3) 00:11:59.388 26.298 - 26.415: 99.2451% ( 3) 00:11:59.388 26.415 - 26.531: 99.2695% ( 2) 00:11:59.388 26.531 - 26.647: 99.2938% ( 2) 00:11:59.388 26.647 - 26.764: 99.3060% ( 1) 00:11:59.388 26.764 - 26.880: 99.3303% ( 2) 00:11:59.388 26.880 - 26.996: 99.3425% ( 1) 00:11:59.388 26.996 - 27.113: 99.3547% ( 1) 00:11:59.388 27.229 - 27.345: 99.3790% ( 2) 00:11:59.388 27.462 - 27.578: 99.3912% ( 1) 00:11:59.388 27.578 - 27.695: 99.4156% ( 2) 00:11:59.388 27.695 - 27.811: 99.4277% ( 1) 00:11:59.388 27.927 - 28.044: 99.4399% ( 1) 00:11:59.388 28.160 - 28.276: 99.4521% ( 1) 00:11:59.388 28.393 - 28.509: 99.4643% ( 1) 00:11:59.388 28.625 - 28.742: 99.5008% ( 3) 00:11:59.388 28.742 - 28.858: 99.5130% ( 1) 00:11:59.388 28.975 - 29.091: 99.5251% ( 1) 00:11:59.388 29.789 - 30.022: 99.5373% ( 1) 00:11:59.388 30.022 - 30.255: 99.5495% ( 1) 00:11:59.388 30.487 - 30.720: 99.5738% ( 2) 00:11:59.388 30.953 - 31.185: 99.5860% ( 1) 00:11:59.388 31.185 - 31.418: 99.5982% ( 1) 00:11:59.388 31.418 - 31.651: 99.6347% ( 3) 00:11:59.388 31.884 - 32.116: 99.6469% ( 1) 00:11:59.388 32.116 - 32.349: 99.6956% ( 4) 00:11:59.388 32.349 - 32.582: 99.7078% ( 1) 00:11:59.388 32.582 - 32.815: 99.7321% ( 2) 00:11:59.388 33.513 - 33.745: 99.7687% ( 3) 00:11:59.388 33.745 - 33.978: 99.7808% ( 1) 00:11:59.389 35.142 - 35.375: 99.7930% ( 1) 00:11:59.389 35.375 - 35.607: 99.8052% ( 1) 00:11:59.389 36.073 - 36.305: 99.8295% ( 2) 00:11:59.389 36.538 - 36.771: 99.8417% ( 1) 00:11:59.389 38.167 - 38.400: 99.8539% ( 1) 00:11:59.389 38.865 - 39.098: 99.8782% ( 2) 00:11:59.389 39.098 - 39.331: 99.8904% ( 1) 00:11:59.389 40.495 - 40.727: 99.9026% ( 1) 00:11:59.389 40.727 - 40.960: 99.9148% ( 1) 00:11:59.389 43.520 - 43.753: 99.9269% ( 1) 00:11:59.389 46.080 - 46.313: 99.9391% ( 1) 00:11:59.389 47.942 - 48.175: 99.9513% ( 1) 00:11:59.389 49.804 - 50.036: 99.9635% ( 1) 00:11:59.389 56.087 - 56.320: 99.9756% ( 1) 00:11:59.389 82.385 - 
82.851: 99.9878% ( 1) 00:11:59.389 84.713 - 85.178: 100.0000% ( 1) 00:11:59.389 00:11:59.389 ************************************ 00:11:59.389 END TEST nvme_overhead 00:11:59.389 ************************************ 00:11:59.389 00:11:59.389 real 0m1.296s 00:11:59.389 user 0m1.097s 00:11:59.389 sys 0m0.144s 00:11:59.389 15:38:47 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.389 15:38:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:59.389 15:38:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:59.389 15:38:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:59.389 15:38:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.389 15:38:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.389 ************************************ 00:11:59.389 START TEST nvme_arbitration 00:11:59.389 ************************************ 00:11:59.389 15:38:48 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:02.675 Initializing NVMe Controllers 00:12:02.675 Attached to 0000:00:13.0 00:12:02.675 Attached to 0000:00:10.0 00:12:02.675 Attached to 0000:00:11.0 00:12:02.676 Attached to 0000:00:12.0 00:12:02.676 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:12:02.676 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:12:02.676 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:12:02.676 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:02.676 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:02.676 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:02.676 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:02.676 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:02.676 Initialization complete. Launching workers. 
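The per-core IO/s split in the worker lines below is driven by queue priority: the arbitration example asks the controller for weighted-round-robin arbitration and then allocates I/O qpairs with an explicit priority class (the "urgent priority queue" the threads report). A sketch of the two knobs involved against the public SPDK API, not the example's verbatim source:

#include <stdbool.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        /* Weighted round robin must be requested before the controller is
         * enabled, so it is set in the probe callback. */
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
        return true;
}

static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
        struct spdk_nvme_io_qpair_opts qopts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts,
                                                   sizeof(qopts));
        qopts.qprio = SPDK_NVME_QPRIO_URGENT;   /* "urgent priority queue" */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}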
00:12:02.676 Starting thread on core 1 with urgent priority queue 00:12:02.676 Starting thread on core 2 with urgent priority queue 00:12:02.676 Starting thread on core 3 with urgent priority queue 00:12:02.676 Starting thread on core 0 with urgent priority queue 00:12:02.676 QEMU NVMe Ctrl (12343 ) core 0: 3200.00 IO/s 31.25 secs/100000 ios 00:12:02.676 QEMU NVMe Ctrl (12342 ) core 0: 3200.00 IO/s 31.25 secs/100000 ios 00:12:02.676 QEMU NVMe Ctrl (12340 ) core 1: 3072.00 IO/s 32.55 secs/100000 ios 00:12:02.676 QEMU NVMe Ctrl (12342 ) core 1: 3072.00 IO/s 32.55 secs/100000 ios 00:12:02.676 QEMU NVMe Ctrl (12341 ) core 2: 3626.67 IO/s 27.57 secs/100000 ios 00:12:02.676 QEMU NVMe Ctrl (12342 ) core 3: 3904.00 IO/s 25.61 secs/100000 ios 00:12:02.676 ======================================================== 00:12:02.676 00:12:02.676 ************************************ 00:12:02.676 END TEST nvme_arbitration 00:12:02.676 ************************************ 00:12:02.676 00:12:02.676 real 0m3.341s 00:12:02.676 user 0m9.062s 00:12:02.676 sys 0m0.198s 00:12:02.676 15:38:51 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.676 15:38:51 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:02.938 15:38:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:02.938 15:38:51 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.938 15:38:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.938 15:38:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.938 ************************************ 00:12:02.938 START TEST nvme_single_aen 00:12:02.938 ************************************ 00:12:02.938 15:38:51 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:03.196 Asynchronous Event Request test 00:12:03.197 Attached to 0000:00:13.0 00:12:03.197 Attached to 0000:00:10.0 00:12:03.197 Attached to 0000:00:11.0 00:12:03.197 Attached to 0000:00:12.0 00:12:03.197 Reset controller to setup AER completions for this process 00:12:03.197 Registering asynchronous event callbacks... 
00:12:03.197 Getting orig temperature thresholds of all controllers 00:12:03.197 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:03.197 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:03.197 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:03.197 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:03.197 Setting all controllers temperature threshold low to trigger AER 00:12:03.197 Waiting for all controllers temperature threshold to be set lower 00:12:03.197 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:03.197 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:03.197 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:03.197 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:03.197 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:03.197 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:03.197 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:03.197 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:03.197 Waiting for all controllers to trigger AER and reset threshold 00:12:03.197 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:03.197 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:03.197 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:03.197 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:03.197 Cleaning up... 00:12:03.197 00:12:03.197 real 0m0.301s 00:12:03.197 user 0m0.105s 00:12:03.197 sys 0m0.149s 00:12:03.197 ************************************ 00:12:03.197 END TEST nvme_single_aen 00:12:03.197 ************************************ 00:12:03.197 15:38:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.197 15:38:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:03.197 15:38:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:03.197 15:38:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.197 15:38:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.197 15:38:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.197 ************************************ 00:12:03.197 START TEST nvme_doorbell_aers 00:12:03.197 ************************************ 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
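The AER tests above (and nvme_multi_aen later in the run) follow the same recipe: register an AER callback, push the temperature threshold feature below the current temperature so the controller fires the event, then poll admin completions until the callback runs and the threshold can be restored. A sketch of the arming step follows; error handling is omitted, and the threshold value 0 is a stand-in for "anything below the current temperature" rather than the value the test actually uses.

#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        /* Runs when the controller completes an Asynchronous Event Request,
         * e.g. once the temperature crosses the configured threshold. */
        printf("AER fired: cdw0=0x%x\n", cpl->cdw0);
}

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static void
arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* cdw11 carries the new threshold. After this, the caller polls
         * spdk_nvme_ctrlr_process_admin_completions() until aer_cb runs. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        0 /* cdw11 */, 0 /* cdw12 */,
                                        NULL, 0, set_feature_done, NULL);
}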
00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:03.197 15:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:03.455 [2024-12-06 15:38:52.127814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:13.443 Executing: test_write_invalid_db 00:12:13.443 Waiting for AER completion... 00:12:13.443 Failure: test_write_invalid_db 00:12:13.443 00:12:13.443 Executing: test_invalid_db_write_overflow_sq 00:12:13.443 Waiting for AER completion... 00:12:13.443 Failure: test_invalid_db_write_overflow_sq 00:12:13.443 00:12:13.443 Executing: test_invalid_db_write_overflow_cq 00:12:13.443 Waiting for AER completion... 00:12:13.443 Failure: test_invalid_db_write_overflow_cq 00:12:13.443 00:12:13.443 15:39:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:13.443 15:39:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:13.703 [2024-12-06 15:39:02.169811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:23.710 Executing: test_write_invalid_db 00:12:23.710 Waiting for AER completion... 00:12:23.710 Failure: test_write_invalid_db 00:12:23.710 00:12:23.710 Executing: test_invalid_db_write_overflow_sq 00:12:23.710 Waiting for AER completion... 00:12:23.710 Failure: test_invalid_db_write_overflow_sq 00:12:23.710 00:12:23.710 Executing: test_invalid_db_write_overflow_cq 00:12:23.710 Waiting for AER completion... 00:12:23.710 Failure: test_invalid_db_write_overflow_cq 00:12:23.710 00:12:23.710 15:39:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:23.710 15:39:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:23.710 [2024-12-06 15:39:12.250005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:33.735 Executing: test_write_invalid_db 00:12:33.735 Waiting for AER completion... 00:12:33.735 Failure: test_write_invalid_db 00:12:33.735 00:12:33.735 Executing: test_invalid_db_write_overflow_sq 00:12:33.735 Waiting for AER completion... 00:12:33.735 Failure: test_invalid_db_write_overflow_sq 00:12:33.735 00:12:33.735 Executing: test_invalid_db_write_overflow_cq 00:12:33.735 Waiting for AER completion... 
00:12:33.735 Failure: test_invalid_db_write_overflow_cq 00:12:33.735 00:12:33.735 15:39:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:33.735 15:39:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:33.735 [2024-12-06 15:39:22.276642] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 Executing: test_write_invalid_db 00:12:43.714 Waiting for AER completion... 00:12:43.714 Failure: test_write_invalid_db 00:12:43.714 00:12:43.714 Executing: test_invalid_db_write_overflow_sq 00:12:43.714 Waiting for AER completion... 00:12:43.714 Failure: test_invalid_db_write_overflow_sq 00:12:43.714 00:12:43.714 Executing: test_invalid_db_write_overflow_cq 00:12:43.714 Waiting for AER completion... 00:12:43.714 Failure: test_invalid_db_write_overflow_cq 00:12:43.714 00:12:43.714 00:12:43.714 real 0m40.272s 00:12:43.714 user 0m34.338s 00:12:43.714 sys 0m5.534s 00:12:43.714 15:39:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.714 15:39:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:43.714 ************************************ 00:12:43.714 END TEST nvme_doorbell_aers 00:12:43.714 ************************************ 00:12:43.714 15:39:32 nvme -- nvme/nvme.sh@97 -- # uname 00:12:43.714 15:39:32 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:43.714 15:39:32 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:43.714 15:39:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:43.714 15:39:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.714 15:39:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.714 ************************************ 00:12:43.714 START TEST nvme_multi_aen 00:12:43.714 ************************************ 00:12:43.714 15:39:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:43.714 [2024-12-06 15:39:32.343422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.343546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.343593] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.346077] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.346143] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.346167] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.348032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. 
Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.348096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.348119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.349859] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.350152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 [2024-12-06 15:39:32.350183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76200) is not found. Dropping the request. 00:12:43.714 Child process pid: 76721 00:12:43.972 [Child] Asynchronous Event Request test 00:12:43.972 [Child] Attached to 0000:00:13.0 00:12:43.972 [Child] Attached to 0000:00:10.0 00:12:43.972 [Child] Attached to 0000:00:11.0 00:12:43.972 [Child] Attached to 0000:00:12.0 00:12:43.972 [Child] Registering asynchronous event callbacks... 00:12:43.972 [Child] Getting orig temperature thresholds of all controllers 00:12:43.972 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.972 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.972 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.972 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.972 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:43.973 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 [Child] Cleaning up... 00:12:43.973 Asynchronous Event Request test 00:12:43.973 Attached to 0000:00:13.0 00:12:43.973 Attached to 0000:00:10.0 00:12:43.973 Attached to 0000:00:11.0 00:12:43.973 Attached to 0000:00:12.0 00:12:43.973 Reset controller to setup AER completions for this process 00:12:43.973 Registering asynchronous event callbacks... 
00:12:43.973 Getting orig temperature thresholds of all controllers 00:12:43.973 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.973 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.973 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.973 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:43.973 Setting all controllers temperature threshold low to trigger AER 00:12:43.973 Waiting for all controllers temperature threshold to be set lower 00:12:43.973 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:43.973 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:43.973 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:43.973 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:43.973 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:43.973 Waiting for all controllers to trigger AER and reset threshold 00:12:43.973 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:43.973 Cleaning up... 00:12:44.231 ************************************ 00:12:44.231 END TEST nvme_multi_aen 00:12:44.231 ************************************ 00:12:44.231 00:12:44.231 real 0m0.557s 00:12:44.231 user 0m0.204s 00:12:44.231 sys 0m0.248s 00:12:44.231 15:39:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.231 15:39:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 15:39:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:44.231 15:39:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:44.231 15:39:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.231 15:39:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 ************************************ 00:12:44.231 START TEST nvme_startup 00:12:44.231 ************************************ 00:12:44.231 15:39:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:44.490 Initializing NVMe Controllers 00:12:44.490 Attached to 0000:00:13.0 00:12:44.490 Attached to 0000:00:10.0 00:12:44.490 Attached to 0000:00:11.0 00:12:44.491 Attached to 0000:00:12.0 00:12:44.491 Initialization complete. 00:12:44.491 Time used:191083.250 (us). 
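The nvme_startup run above attached all four controllers in about 191 ms, well inside the 1,000,000 us budget passed via -t. A minimal bash sketch of the run_test wrapper that produces the START/END banners throughout this log, assuming it only labels and times the wrapped command (the real helper in autotest_common.sh also manages xtrace state and exit-code bookkeeping, so treat this as an illustration, not the actual script):

  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"    # e.g. /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
    local rc=$?
    echo "END TEST $name"
    return "$rc"
  }

  run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000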
00:12:44.491 ************************************ 00:12:44.491 END TEST nvme_startup 00:12:44.491 ************************************ 00:12:44.491 00:12:44.491 real 0m0.286s 00:12:44.491 user 0m0.107s 00:12:44.491 sys 0m0.128s 00:12:44.491 15:39:33 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.491 15:39:33 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:44.491 15:39:33 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:44.491 15:39:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.491 15:39:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.491 15:39:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.491 ************************************ 00:12:44.491 START TEST nvme_multi_secondary 00:12:44.491 ************************************ 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76771 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76772 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:44.491 15:39:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:48.680 Initializing NVMe Controllers 00:12:48.680 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:48.680 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:48.680 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:48.680 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:48.680 Initialization complete. Launching workers. 
00:12:48.680 ======================================================== 00:12:48.680 Latency(us) 00:12:48.680 Device Information : IOPS MiB/s Average min max 00:12:48.680 PCIE (0000:00:13.0) NSID 1 from core 2: 2060.81 8.05 7762.84 2281.89 17309.65 00:12:48.680 PCIE (0000:00:10.0) NSID 1 from core 2: 2060.81 8.05 7754.81 1981.05 17949.02 00:12:48.680 PCIE (0000:00:11.0) NSID 1 from core 2: 2060.81 8.05 7757.03 2012.81 16703.86 00:12:48.680 PCIE (0000:00:12.0) NSID 1 from core 2: 2060.81 8.05 7756.95 2096.43 15031.85 00:12:48.680 PCIE (0000:00:12.0) NSID 2 from core 2: 2060.81 8.05 7757.48 2100.96 14764.44 00:12:48.680 PCIE (0000:00:12.0) NSID 3 from core 2: 2060.81 8.05 7757.31 2055.33 17534.78 00:12:48.680 ======================================================== 00:12:48.680 Total : 12364.84 48.30 7757.74 1981.05 17949.02 00:12:48.680 00:12:48.680 Initializing NVMe Controllers 00:12:48.680 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:48.680 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:48.680 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:48.680 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:48.680 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:48.680 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:48.680 Initialization complete. Launching workers. 00:12:48.680 ======================================================== 00:12:48.680 Latency(us) 00:12:48.680 Device Information : IOPS MiB/s Average min max 00:12:48.680 PCIE (0000:00:13.0) NSID 1 from core 1: 4808.62 18.78 3326.65 1342.09 6300.31 00:12:48.680 PCIE (0000:00:10.0) NSID 1 from core 1: 4808.62 18.78 3324.75 1297.19 6380.29 00:12:48.680 PCIE (0000:00:11.0) NSID 1 from core 1: 4808.62 18.78 3326.22 1325.41 6422.78 00:12:48.680 PCIE (0000:00:12.0) NSID 1 from core 1: 4808.62 18.78 3325.89 1179.27 7244.18 00:12:48.680 PCIE (0000:00:12.0) NSID 2 from core 1: 4808.62 18.78 3325.65 1203.15 7192.18 00:12:48.680 PCIE (0000:00:12.0) NSID 3 from core 1: 4808.62 18.78 3325.65 1192.03 7122.44 00:12:48.680 ======================================================== 00:12:48.680 Total : 28851.75 112.70 3325.80 1179.27 7244.18 00:12:48.680 00:12:48.680 15:39:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76771 00:12:49.612 Initializing NVMe Controllers 00:12:49.612 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:49.612 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:49.612 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:49.612 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:49.612 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:49.612 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:49.612 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:49.612 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:49.612 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:49.612 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:49.612 Initialization complete. Launching workers. 
00:12:49.612 ======================================================== 00:12:49.612 Latency(us) 00:12:49.612 Device Information : IOPS MiB/s Average min max 00:12:49.612 PCIE (0000:00:13.0) NSID 1 from core 0: 7340.31 28.67 2179.25 1034.67 15948.98 00:12:49.612 PCIE (0000:00:10.0) NSID 1 from core 0: 7340.31 28.67 2177.94 1031.26 16006.70 00:12:49.612 PCIE (0000:00:11.0) NSID 1 from core 0: 7340.31 28.67 2179.17 1000.48 16426.11 00:12:49.612 PCIE (0000:00:12.0) NSID 1 from core 0: 7340.31 28.67 2179.11 836.87 17009.25 00:12:49.612 PCIE (0000:00:12.0) NSID 2 from core 0: 7340.31 28.67 2179.04 749.75 15628.75 00:12:49.612 PCIE (0000:00:12.0) NSID 3 from core 0: 7340.31 28.67 2178.96 592.49 16056.49 00:12:49.612 ======================================================== 00:12:49.612 Total : 44041.87 172.04 2178.91 592.49 17009.25 00:12:49.612 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76772 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76847 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76848 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:49.870 15:39:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:53.153 Initializing NVMe Controllers 00:12:53.153 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:53.153 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:53.153 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:53.153 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:53.153 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:53.153 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:53.153 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:53.153 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:53.153 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:53.153 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:53.153 Initialization complete. Launching workers. 
00:12:53.153 ======================================================== 00:12:53.153 Latency(us) 00:12:53.153 Device Information : IOPS MiB/s Average min max 00:12:53.153 PCIE (0000:00:13.0) NSID 1 from core 1: 5325.46 20.80 3003.72 1200.62 6632.97 00:12:53.153 PCIE (0000:00:10.0) NSID 1 from core 1: 5325.46 20.80 3002.09 1150.29 7179.11 00:12:53.153 PCIE (0000:00:11.0) NSID 1 from core 1: 5325.46 20.80 3003.50 1189.67 7125.97 00:12:53.153 PCIE (0000:00:12.0) NSID 1 from core 1: 5325.46 20.80 3003.55 1152.29 6600.67 00:12:53.153 PCIE (0000:00:12.0) NSID 2 from core 1: 5325.46 20.80 3003.35 1175.40 6703.04 00:12:53.154 PCIE (0000:00:12.0) NSID 3 from core 1: 5325.46 20.80 3003.16 1160.65 6346.02 00:12:53.154 ======================================================== 00:12:53.154 Total : 31952.77 124.82 3003.23 1150.29 7179.11 00:12:53.154 00:12:53.154 Initializing NVMe Controllers 00:12:53.154 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:53.154 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:53.154 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:53.154 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:53.154 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:53.154 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:53.154 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:53.154 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:53.154 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:53.154 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:53.154 Initialization complete. Launching workers. 00:12:53.154 ======================================================== 00:12:53.154 Latency(us) 00:12:53.154 Device Information : IOPS MiB/s Average min max 00:12:53.154 PCIE (0000:00:13.0) NSID 1 from core 0: 5181.30 20.24 3087.36 1124.94 11392.14 00:12:53.154 PCIE (0000:00:10.0) NSID 1 from core 0: 5181.30 20.24 3085.67 1096.65 10991.09 00:12:53.154 PCIE (0000:00:11.0) NSID 1 from core 0: 5181.30 20.24 3087.00 1097.63 11006.73 00:12:53.154 PCIE (0000:00:12.0) NSID 1 from core 0: 5181.30 20.24 3086.79 802.05 10716.53 00:12:53.154 PCIE (0000:00:12.0) NSID 2 from core 0: 5181.30 20.24 3086.54 709.29 11187.87 00:12:53.154 PCIE (0000:00:12.0) NSID 3 from core 0: 5181.30 20.24 3086.25 598.42 11258.80 00:12:53.154 ======================================================== 00:12:53.154 Total : 31087.80 121.44 3086.60 598.42 11392.14 00:12:53.154 00:12:55.075 Initializing NVMe Controllers 00:12:55.075 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:55.075 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:55.075 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:55.075 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:55.075 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:55.075 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:55.075 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:55.075 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:55.075 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:55.075 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:55.075 Initialization complete. Launching workers. 
00:12:55.075 ======================================================== 00:12:55.075 Latency(us) 00:12:55.075 Device Information : IOPS MiB/s Average min max 00:12:55.075 PCIE (0000:00:13.0) NSID 1 from core 2: 3556.64 13.89 4497.98 1119.92 14748.29 00:12:55.075 PCIE (0000:00:10.0) NSID 1 from core 2: 3556.64 13.89 4495.61 1104.49 13868.43 00:12:55.075 PCIE (0000:00:11.0) NSID 1 from core 2: 3556.64 13.89 4499.09 1042.96 16857.21 00:12:55.075 PCIE (0000:00:12.0) NSID 1 from core 2: 3556.64 13.89 4498.57 1107.49 16577.10 00:12:55.075 PCIE (0000:00:12.0) NSID 2 from core 2: 3556.64 13.89 4498.60 1127.08 13575.38 00:12:55.075 PCIE (0000:00:12.0) NSID 3 from core 2: 3556.64 13.89 4498.03 1126.00 14294.49 00:12:55.075 ======================================================== 00:12:55.075 Total : 21339.86 83.36 4497.98 1042.96 16857.21 00:12:55.075 00:12:55.333 ************************************ 00:12:55.333 END TEST nvme_multi_secondary 00:12:55.333 ************************************ 00:12:55.333 15:39:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76847 00:12:55.334 15:39:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76848 00:12:55.334 00:12:55.334 real 0m10.750s 00:12:55.334 user 0m18.449s 00:12:55.334 sys 0m0.861s 00:12:55.334 15:39:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.334 15:39:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:55.334 15:39:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:55.334 15:39:43 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/75781 ]] 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1094 -- # kill 75781 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1095 -- # wait 75781 00:12:55.334 [2024-12-06 15:39:43.852236] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.852397] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.852445] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.852485] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.854026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.854118] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.854156] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.854195] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.855468] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 
00:12:55.334 [2024-12-06 15:39:43.855561] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.855611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.855665] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.856849] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.856991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.857033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 [2024-12-06 15:39:43.857071] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76720) is not found. Dropping the request. 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:55.334 15:39:43 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.334 15:39:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.334 ************************************ 00:12:55.334 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:55.334 ************************************ 00:12:55.334 15:39:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:55.592 * Looking for test storage... 
00:12:55.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.593 --rc genhtml_branch_coverage=1 00:12:55.593 --rc genhtml_function_coverage=1 00:12:55.593 --rc genhtml_legend=1 00:12:55.593 --rc geninfo_all_blocks=1 00:12:55.593 --rc geninfo_unexecuted_blocks=1 00:12:55.593 00:12:55.593 ' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.593 --rc genhtml_branch_coverage=1 00:12:55.593 --rc genhtml_function_coverage=1 00:12:55.593 --rc genhtml_legend=1 00:12:55.593 --rc geninfo_all_blocks=1 00:12:55.593 --rc geninfo_unexecuted_blocks=1 00:12:55.593 00:12:55.593 ' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.593 --rc genhtml_branch_coverage=1 00:12:55.593 --rc genhtml_function_coverage=1 00:12:55.593 --rc genhtml_legend=1 00:12:55.593 --rc geninfo_all_blocks=1 00:12:55.593 --rc geninfo_unexecuted_blocks=1 00:12:55.593 00:12:55.593 ' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:55.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.593 --rc genhtml_branch_coverage=1 00:12:55.593 --rc genhtml_function_coverage=1 00:12:55.593 --rc genhtml_legend=1 00:12:55.593 --rc geninfo_all_blocks=1 00:12:55.593 --rc geninfo_unexecuted_blocks=1 00:12:55.593 00:12:55.593 ' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:55.593 
15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:55.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77003 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77003 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77003 ']' 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
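This stretch of the trace brings up a standalone SPDK target (spdk_tgt -m 0xF, reactors on cores 0-3) and parks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough bash equivalent of that handshake, assuming a simple polling loop (the real waitforlisten in autotest_common.sh is more careful, e.g. about retry limits, but the shape is the same):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  spdk_target_pid=$!
  # Poll the RPC socket; rpc_get_methods succeeds once the target is listening.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_target_pid" || exit 1   # give up if the target died first
    sleep 0.1
  done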
00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.593 15:39:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 [2024-12-06 15:39:44.351851] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:12:55.852 [2024-12-06 15:39:44.352098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77003 ] 00:12:55.852 [2024-12-06 15:39:44.532635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.111 [2024-12-06 15:39:44.590069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.111 [2024-12-06 15:39:44.590209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.111 [2024-12-06 15:39:44.590287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.111 [2024-12-06 15:39:44.590339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:57.055 nvme0n1 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_cf25X.txt 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:57.055 true 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733499585 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77032 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:57.055 15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:57.055 
15:39:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:58.960 [2024-12-06 15:39:47.487532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:58.960 [2024-12-06 15:39:47.488015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:58.960 [2024-12-06 15:39:47.488051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:58.960 [2024-12-06 15:39:47.488075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.960 [2024-12-06 15:39:47.490725] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:58.960 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77032 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77032 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77032 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_cf25X.txt 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:58.960 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_cf25X.txt 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77003 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77003 ']' 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77003 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77003 00:12:58.961 killing process with pid 77003 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77003' 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77003 00:12:58.961 15:39:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77003 00:12:59.529 15:39:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:59.529 15:39:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:59.529 00:12:59.529 real 0m4.150s 00:12:59.529 user 0m14.627s 00:12:59.529 sys 0m0.739s 00:12:59.529 ************************************ 00:12:59.529 END TEST bdev_nvme_reset_stuck_adm_cmd 
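Condensed from the RPCs traced above, the whole bdev_nvme_reset_stuck_adm_cmd flow boils down to the sequence below. Every flag is taken verbatim from the trace; $cmd_b64 stands in for the base64 command buffer shown above (a GET FEATURES admin command with cdw10=7, number of queues), so read this as a sketch of the script's shape rather than its exact contents:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Arm a one-shot injection: the next admin opcode 0x0a (GET FEATURES) is held
  # for up to 15 s and completed with sct=0/sc=1 instead of being submitted.
  $rpc_py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # Fire the admin command; it parks on the injection point.
  $rpc_py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
  sleep 2
  # Resetting the controller must complete the stuck command promptly.
  $rpc_py bdev_nvme_reset_controller nvme0
  wait   # send_cmd returns once the reset completes it (the 00/01 cpl in the trace)
  $rpc_py bdev_nvme_detach_controller nvme0

The pass criteria are exactly the two checks visible at the end of the traced script: the completion status decoded from send_cmd's output matches the injected sct/sc pair, and the elapsed time (diff_time=2) stays within the 5 second test_timeout.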
00:12:59.529 ************************************ 00:12:59.529 15:39:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.529 15:39:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:59.529 15:39:48 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:59.529 15:39:48 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:59.529 15:39:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.529 15:39:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.529 15:39:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.529 ************************************ 00:12:59.529 START TEST nvme_fio 00:12:59.529 ************************************ 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:59.529 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:59.529 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:59.529 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:59.529 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:59.788 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:59.788 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:59.788 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:59.788 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:59.788 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:59.788 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:59.788 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:00.047 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:00.047 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:00.305 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:00.305 15:39:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:00.305 15:39:48 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:00.305 15:39:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:00.305 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:00.305 fio-3.35 00:13:00.305 Starting 1 thread 00:13:03.592 00:13:03.592 test: (groupid=0, jobs=1): err= 0: pid=77161: Fri Dec 6 15:39:51 2024 00:13:03.592 read: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(116MiB/2001msec) 00:13:03.592 slat (nsec): min=4310, max=67739, avg=7275.75, stdev=3962.60 00:13:03.592 clat (usec): min=253, max=9703, avg=4297.47, stdev=541.60 00:13:03.592 lat (usec): min=260, max=9771, avg=4304.75, stdev=542.41 00:13:03.592 clat percentiles (usec): 00:13:03.592 | 1.00th=[ 3621], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3916], 00:13:03.592 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:13:03.592 | 70.00th=[ 4424], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5014], 00:13:03.592 | 99.00th=[ 6259], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 8717], 00:13:03.592 | 99.99th=[ 9634] 00:13:03.592 bw ( KiB/s): min=56678, max=61216, per=98.83%, avg=58559.33, stdev=2366.27, samples=3 00:13:03.592 iops : min=14169, max=15304, avg=14639.67, stdev=591.77, samples=3 00:13:03.592 write: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(116MiB/2001msec); 0 zone resets 00:13:03.592 slat (nsec): min=4477, max=53554, avg=7514.41, stdev=4031.42 00:13:03.592 clat (usec): min=415, max=9602, avg=4311.41, stdev=541.17 00:13:03.592 lat (usec): min=421, max=9618, avg=4318.93, stdev=541.94 00:13:03.592 clat percentiles (usec): 00:13:03.592 | 1.00th=[ 3621], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3949], 00:13:03.592 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:13:03.592 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5080], 00:13:03.592 | 99.00th=[ 6325], 99.50th=[ 7373], 99.90th=[ 8717], 99.95th=[ 8717], 00:13:03.592 | 99.99th=[ 9372] 00:13:03.592 bw ( KiB/s): min=57069, max=60200, per=98.46%, avg=58396.33, stdev=1618.94, samples=3 00:13:03.592 iops : min=14267, max=15050, avg=14599.00, stdev=404.84, samples=3 00:13:03.592 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:03.592 lat (msec) : 2=0.05%, 4=28.47%, 10=71.43% 00:13:03.592 cpu : usr=98.60%, sys=0.20%, ctx=6, majf=0, minf=626 
00:13:03.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:03.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:03.592 issued rwts: total=29642,29670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:03.592 00:13:03.592 Run status group 0 (all jobs): 00:13:03.592 READ: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=116MiB (121MB), run=2001-2001msec 00:13:03.592 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:03.592 ----------------------------------------------------- 00:13:03.593 Suppressions used: 00:13:03.593 count bytes template 00:13:03.593 1 32 /usr/src/fio/parse.c 00:13:03.593 1 8 libtcmalloc_minimal.so 00:13:03.593 ----------------------------------------------------- 00:13:03.593 00:13:03.593 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:03.593 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:03.593 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:03.593 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:03.851 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:03.851 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:04.110 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:04.110 15:39:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:04.110 15:39:52 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:04.110 15:39:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:04.368 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:04.368 fio-3.35 00:13:04.368 Starting 1 thread 00:13:07.653 00:13:07.653 test: (groupid=0, jobs=1): err= 0: pid=77222: Fri Dec 6 15:39:55 2024 00:13:07.653 read: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec) 00:13:07.653 slat (usec): min=4, max=170, avg= 6.83, stdev= 4.02 00:13:07.653 clat (usec): min=322, max=11213, avg=4116.16, stdev=348.65 00:13:07.653 lat (usec): min=328, max=11287, avg=4122.99, stdev=349.21 00:13:07.653 clat percentiles (usec): 00:13:07.653 | 1.00th=[ 3589], 5.00th=[ 3720], 10.00th=[ 3785], 20.00th=[ 3884], 00:13:07.653 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:13:07.653 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:13:07.653 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 7111], 99.95th=[ 9503], 00:13:07.653 | 99.99th=[10945] 00:13:07.653 bw ( KiB/s): min=57992, max=64256, per=99.56%, avg=61573.33, stdev=3227.25, samples=3 00:13:07.653 iops : min=14498, max=16064, avg=15393.33, stdev=806.81, samples=3 00:13:07.653 write: IOPS=15.5k, BW=60.4MiB/s (63.4MB/s)(121MiB/2001msec); 0 zone resets 00:13:07.653 slat (nsec): min=4474, max=65442, avg=7074.89, stdev=3915.63 00:13:07.653 clat (usec): min=259, max=11020, avg=4133.53, stdev=351.66 00:13:07.653 lat (usec): min=280, max=11035, avg=4140.61, stdev=352.13 00:13:07.653 clat percentiles (usec): 00:13:07.653 | 1.00th=[ 3589], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3916], 00:13:07.653 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:13:07.653 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4686], 00:13:07.653 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 7898], 99.95th=[ 9765], 00:13:07.653 | 99.99th=[10683] 00:13:07.653 bw ( KiB/s): min=58216, max=63128, per=98.79%, avg=61138.67, stdev=2585.59, samples=3 00:13:07.653 iops : min=14554, max=15782, avg=15284.67, stdev=646.40, samples=3 00:13:07.653 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:07.653 lat (msec) : 2=0.05%, 4=33.91%, 10=65.96%, 20=0.04% 00:13:07.653 cpu : usr=98.65%, sys=0.25%, ctx=6, majf=0, minf=626 00:13:07.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:07.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.653 issued rwts: total=30938,30958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.653 00:13:07.653 Run status group 0 (all jobs): 00:13:07.653 READ: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:13:07.653 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:13:07.653 ----------------------------------------------------- 00:13:07.653 Suppressions used: 00:13:07.653 count bytes template 00:13:07.653 1 32 /usr/src/fio/parse.c 00:13:07.653 1 8 libtcmalloc_minimal.so 00:13:07.653 ----------------------------------------------------- 00:13:07.653 00:13:07.653 15:39:56 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:07.653 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:07.653 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:07.653 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:07.911 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:07.911 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:08.170 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:08.170 15:39:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:08.170 15:39:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:08.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:08.428 fio-3.35 00:13:08.428 Starting 1 thread 00:13:11.712 00:13:11.712 test: (groupid=0, jobs=1): err= 0: pid=77288: Fri Dec 6 15:40:00 2024 00:13:11.712 read: IOPS=17.6k, BW=68.6MiB/s (72.0MB/s)(137MiB/2001msec) 00:13:11.712 slat (nsec): min=4493, max=82250, avg=6237.05, stdev=2902.53 00:13:11.712 clat (usec): min=248, max=12445, avg=3622.96, stdev=428.53 00:13:11.712 lat (usec): min=254, max=12528, avg=3629.20, stdev=429.14 00:13:11.712 clat percentiles (usec): 00:13:11.712 | 1.00th=[ 3130], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3359], 00:13:11.712 | 30.00th=[ 3425], 40.00th=[ 
3490], 50.00th=[ 3523], 60.00th=[ 3621], 00:13:11.712 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 3982], 95.00th=[ 4293], 00:13:11.712 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 8455], 99.95th=[10552], 00:13:11.712 | 99.99th=[12125] 00:13:11.712 bw ( KiB/s): min=61816, max=73152, per=98.41%, avg=69162.67, stdev=6370.24, samples=3 00:13:11.712 iops : min=15454, max=18288, avg=17290.67, stdev=1592.56, samples=3 00:13:11.712 write: IOPS=17.6k, BW=68.7MiB/s (72.0MB/s)(137MiB/2001msec); 0 zone resets 00:13:11.712 slat (nsec): min=4780, max=56178, avg=6481.32, stdev=2921.01 00:13:11.712 clat (usec): min=219, max=12257, avg=3635.09, stdev=438.88 00:13:11.712 lat (usec): min=225, max=12275, avg=3641.57, stdev=439.42 00:13:11.712 clat percentiles (usec): 00:13:11.712 | 1.00th=[ 3130], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3392], 00:13:11.712 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3621], 00:13:11.712 | 70.00th=[ 3687], 80.00th=[ 3818], 90.00th=[ 4015], 95.00th=[ 4293], 00:13:11.712 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 9241], 99.95th=[10683], 00:13:11.712 | 99.99th=[11994] 00:13:11.712 bw ( KiB/s): min=62168, max=72992, per=98.22%, avg=69080.00, stdev=6003.31, samples=3 00:13:11.712 iops : min=15542, max=18248, avg=17270.00, stdev=1500.83, samples=3 00:13:11.712 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:13:11.712 lat (msec) : 2=0.06%, 4=90.01%, 10=9.82%, 20=0.07% 00:13:11.712 cpu : usr=98.90%, sys=0.15%, ctx=4, majf=0, minf=627 00:13:11.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:11.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:11.713 issued rwts: total=35158,35184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:11.713 00:13:11.713 Run status group 0 (all jobs): 00:13:11.713 READ: bw=68.6MiB/s (72.0MB/s), 68.6MiB/s-68.6MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:13:11.713 WRITE: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2001-2001msec 00:13:11.971 ----------------------------------------------------- 00:13:11.971 Suppressions used: 00:13:11.971 count bytes template 00:13:11.971 1 32 /usr/src/fio/parse.c 00:13:11.971 1 8 libtcmalloc_minimal.so 00:13:11.971 ----------------------------------------------------- 00:13:11.971 00:13:11.971 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:11.971 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:11.971 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:11.971 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:12.230 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:12.230 15:40:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:12.489 15:40:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:12.489 15:40:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:12.489 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:12.489 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:12.748 15:40:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:12.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:12.748 fio-3.35 00:13:12.748 Starting 1 thread 00:13:16.965 00:13:16.965 test: (groupid=0, jobs=1): err= 0: pid=77354: Fri Dec 6 15:40:04 2024 00:13:16.965 read: IOPS=15.5k, BW=60.5MiB/s (63.4MB/s)(121MiB/2001msec) 00:13:16.965 slat (nsec): min=4281, max=74639, avg=6858.22, stdev=3513.84 00:13:16.965 clat (usec): min=216, max=10212, avg=4108.51, stdev=687.10 00:13:16.965 lat (usec): min=222, max=10287, avg=4115.36, stdev=688.17 00:13:16.965 clat percentiles (usec): 00:13:16.965 | 1.00th=[ 3261], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3621], 00:13:16.965 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 3982], 00:13:16.965 | 70.00th=[ 4146], 80.00th=[ 4621], 90.00th=[ 5145], 95.00th=[ 5407], 00:13:16.965 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 8586], 00:13:16.965 | 99.99th=[10028] 00:13:16.965 bw ( KiB/s): min=56976, max=62952, per=97.36%, avg=60325.33, stdev=3052.84, samples=3 00:13:16.965 iops : min=14244, max=15738, avg=15081.33, stdev=763.21, samples=3 00:13:16.965 write: IOPS=15.5k, BW=60.5MiB/s (63.5MB/s)(121MiB/2001msec); 0 zone resets 00:13:16.965 slat (nsec): min=4400, max=83096, avg=7207.18, stdev=3788.52 00:13:16.965 clat (usec): min=234, max=10049, avg=4123.33, stdev=692.02 00:13:16.965 lat (usec): min=240, max=10074, avg=4130.54, stdev=693.11 00:13:16.965 clat percentiles (usec): 00:13:16.965 | 1.00th=[ 3261], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3654], 00:13:16.965 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3982], 00:13:16.965 | 70.00th=[ 4146], 80.00th=[ 4621], 90.00th=[ 5145], 95.00th=[ 5473], 
00:13:16.965 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 8717], 00:13:16.965 | 99.99th=[ 9765] 00:13:16.965 bw ( KiB/s): min=57288, max=62552, per=96.77%, avg=59997.33, stdev=2635.41, samples=3 00:13:16.965 iops : min=14322, max=15638, avg=14999.33, stdev=658.85, samples=3 00:13:16.965 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:16.965 lat (msec) : 2=0.06%, 4=61.05%, 10=38.85%, 20=0.01% 00:13:16.965 cpu : usr=98.55%, sys=0.20%, ctx=17, majf=0, minf=625 00:13:16.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:16.965 issued rwts: total=30995,31016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:16.965 00:13:16.965 Run status group 0 (all jobs): 00:13:16.965 READ: bw=60.5MiB/s (63.4MB/s), 60.5MiB/s-60.5MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:13:16.965 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=121MiB (127MB), run=2001-2001msec 00:13:16.965 ----------------------------------------------------- 00:13:16.965 Suppressions used: 00:13:16.965 count bytes template 00:13:16.965 1 32 /usr/src/fio/parse.c 00:13:16.965 1 8 libtcmalloc_minimal.so 00:13:16.965 ----------------------------------------------------- 00:13:16.965 00:13:16.965 ************************************ 00:13:16.965 END TEST nvme_fio 00:13:16.965 ************************************ 00:13:16.965 15:40:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:16.965 15:40:05 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:16.965 00:13:16.965 real 0m16.909s 00:13:16.965 user 0m13.587s 00:13:16.965 sys 0m2.030s 00:13:16.965 15:40:05 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.965 15:40:05 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.965 ************************************ 00:13:16.965 END TEST nvme 00:13:16.965 ************************************ 00:13:16.965 00:13:16.965 real 1m28.489s 00:13:16.965 user 3m36.960s 00:13:16.965 sys 0m14.565s 00:13:16.965 15:40:05 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.965 15:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.965 15:40:05 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:16.965 15:40:05 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:16.965 15:40:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.965 15:40:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.965 15:40:05 -- common/autotest_common.sh@10 -- # set +x 00:13:16.965 ************************************ 00:13:16.965 START TEST nvme_scc 00:13:16.965 ************************************ 00:13:16.965 15:40:05 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:16.965 * Looking for test storage... 
00:13:16.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:16.965 15:40:05 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:16.965 15:40:05 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:16.965 15:40:05 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:16.965 15:40:05 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:16.965 15:40:05 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:16.966 15:40:05 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.966 15:40:05 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:16.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.966 --rc genhtml_branch_coverage=1 00:13:16.966 --rc genhtml_function_coverage=1 00:13:16.966 --rc genhtml_legend=1 00:13:16.966 --rc geninfo_all_blocks=1 00:13:16.966 --rc geninfo_unexecuted_blocks=1 00:13:16.966 00:13:16.966 ' 00:13:16.966 15:40:05 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:16.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.966 --rc genhtml_branch_coverage=1 00:13:16.966 --rc genhtml_function_coverage=1 00:13:16.966 --rc genhtml_legend=1 00:13:16.966 --rc geninfo_all_blocks=1 00:13:16.966 --rc geninfo_unexecuted_blocks=1 00:13:16.966 00:13:16.966 ' 00:13:16.966 15:40:05 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:16.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.966 --rc genhtml_branch_coverage=1 00:13:16.966 --rc genhtml_function_coverage=1 00:13:16.966 --rc genhtml_legend=1 00:13:16.966 --rc geninfo_all_blocks=1 00:13:16.966 --rc geninfo_unexecuted_blocks=1 00:13:16.966 00:13:16.966 ' 00:13:16.966 15:40:05 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:16.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.966 --rc genhtml_branch_coverage=1 00:13:16.966 --rc genhtml_function_coverage=1 00:13:16.966 --rc genhtml_legend=1 00:13:16.966 --rc geninfo_all_blocks=1 00:13:16.966 --rc geninfo_unexecuted_blocks=1 00:13:16.966 00:13:16.966 ' 00:13:16.966 15:40:05 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.966 15:40:05 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.966 15:40:05 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.966 15:40:05 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.966 15:40:05 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.966 15:40:05 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:16.966 15:40:05 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
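Stripped of the xtrace noise, each of the three fio runs in the nvme_fio trace above reduces to the same pattern: find the ASan runtime the plugin links against, preload it together with the plugin, and hand fio the target controller as an SPDK-style filename. A condensed sketch of that pattern, with paths taken from the log:

```bash
#!/usr/bin/env bash
# Sketch of one fio_nvme invocation from the trace above (paths from the log).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# The plugin was built with ASan, so fio itself must preload the same
# runtime or the interceptors initialize too late. (The helper in the
# trace also checks libclang_rt.asan as a fallback.)
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# fio splits --filename on ':', hence the '.'-separated PCI address.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
  '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
```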
00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:16.966 15:40:05 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:16.966 15:40:05 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.966 15:40:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:16.966 15:40:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:16.966 15:40:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:16.966 15:40:05 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:17.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:17.484 Waiting for block devices as requested 00:13:17.484 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:17.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:17.742 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:17.742 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.027 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:23.027 15:40:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:23.027 15:40:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.027 15:40:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:23.027 15:40:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.027 15:40:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
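The register-by-register trace around this point is functions.sh's nvme_get helper filling a bash associative array from nvme-cli output. Condensed, the loop amounts to roughly the following (the dynamic array naming done via eval in the real helper is omitted here):

```bash
# Rough sketch of the nvme_get loop that produces the surrounding trace.
declare -gA nvme0=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # register name: vid, ssvid, sn, ...
    val=${val#"${val%%[![:space:]]*}"}     # drop leading blanks from the value
    [[ -n $reg && -n $val ]] || continue   # header/blank lines carry no value
    nvme0[$reg]=$val                       # e.g. nvme0[vid]=0x1b36
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
```

Note that only leading whitespace is trimmed, which is why values such as the serial number keep their padding ('12341 ') in the trace, and that read assigns everything after the first ':' to val, so fields like ps0 retain their embedded colons (enlat:16 exlat:4).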
00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.027 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:23.028 15:40:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.028 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.029 15:40:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.029 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # 
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:13:23.030 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
[trace condensed: dps, nmic, rescap, fpi, nawun, nawupf, nacwu, nabsn, nabo, nabspf, noiob, nvmcap, npwg, npwa, npdg, npda, nows, nulbaf, anagrpid, nsattr, nvmsetid and endgid all parse as 0]
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
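The lbaf table just parsed is what later checks key off: flbas (0x4 here) selects the in-use LBA format, and each lbads value is a power-of-two exponent, so lbaf4's lbads:12 means 4096-byte logical blocks. A sketch of that decoding against the values above (the array literal and variable names are illustrative, not part of the suite):

#!/usr/bin/env bash
# Sketch: recover the in-use logical block size from the fields the trace
# collected. Per the NVMe identify layout, flbas's low nibble indexes the
# LBA format, and lbads is a power-of-two exponent.
declare -A ng0n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)
fmt=$((${ng0n1[flbas]} & 0xf))              # -> 4
lbaf=${ng0n1[lbaf$fmt]}
lbads=${lbaf#*lbads:}                        # "12 rp:0 (in use)"
lbads=${lbads%% *}                           # "12"
echo "in-use block size: $((1 << lbads)) bytes"   # -> 4096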
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:13:23.031 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
[trace condensed: the block node nvme0n1 reports the same id-ns data as the character node ng0n1: mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, zero nguid/eui64, all remaining fields 0, and the identical lbaf0-lbaf7 table with lbaf4 (ms:0 lbads:12 rp:0) in use]
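That each namespace shows up twice (ng0n1, then nvme0n1) is the work of a single extglob pattern, visible verbatim in the trace: @("ng0"|"nvme0n")* matches both the character node and the block node under the controller's sysfs directory. A standalone sketch of that expansion, assuming the standard /sys/class/nvme layout:

#!/usr/bin/env bash
# Sketch of the sysfs enumeration in the trace. extglob is required for the
# @(a|b) alternation used in the namespace glob.
shopt -s extglob

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    inst=${ctrl##*nvme}   # "0"     -> character nodes ng0n1, ng0n2, ...
    name=${ctrl##*/}      # "nvme0" -> block nodes nvme0n1, nvme0n2, ...
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        [[ -e $ns ]] || continue   # the glob may match nothing
        echo "ns_dev=${ns##*/}"    # ng0n1, then nvme0n1
    done
done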
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:13:23.033 15:40:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:13:23.033 15:40:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:23.033 15:40:11 nvme_scc -- scripts/common.sh@27 -- # return 0
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
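Once both namespace nodes are filed, the controller itself is registered in a set of global maps: ctrls points the device name at its id-ctrl array, nvmes at the per-controller namespace map, bdfs at the PCI address, and ordered_ctrls keeps a numerically indexed order. Discovery of the next controller then passes through pci_can_use from scripts/common.sh. A simplified sketch of that bookkeeping (the filter here is reduced to a block-list check; the real helper also consults an allow list, per the trace's [[ =~ ]] test):

#!/usr/bin/env bash
# Sketch of the registration step in the trace. Array names mirror the log;
# pci_can_use is simplified (assumption: the real helper also honors an
# allow list such as PCI_ALLOWED).
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

register_ctrl() {
    local ctrl_dev=$1 bdf=$2
    ctrls["$ctrl_dev"]=$ctrl_dev          # id-ctrl array name, e.g. nvme0
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns     # namespace map name, nvme0_ns
    bdfs["$ctrl_dev"]=$bdf                # PCI address, e.g. 0000:00:11.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
}

pci_can_use() {  # return 0 unless the BDF is explicitly blocked
    [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]
}

register_ctrl nvme0 0000:00:11.0
pci_can_use 0000:00:10.0 && register_ctrl nvme1 0000:00:10.0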
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:13:23.033 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
[trace condensed: cmic, cntlid, rtd3r, rtd3e, rrls, crdt1-crdt3, nvmsr, vwci, mec, elpe, npss, avscc, apsta, mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug and kas all parse as 0]
00:13:23.034 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
[trace condensed: hctma, mntmt, mxtmt, sanicap, hmminds, hmmaxd, nsetidmax, endgidmax, anatt, anacap, anagrpmax, nanagrpid, pels, domainid, megcap, maxcmd, fuses, fna, awun, awupf, icsvscc, nwpc, acwu, mnan, maxdna, maxcna, ioccsz, iorcsz and icdoff all parse as 0]
nvme1[fcatt]=0 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.035 15:40:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.035 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.036 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:23.318 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:23.318 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.318 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.318 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.318 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:23.319 15:40:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 
15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.319 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:23.320 
15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.320 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:23.321 15:40:11 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:23.321 15:40:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.321 15:40:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:23.321 15:40:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.321 15:40:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.321 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:23.322 15:40:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:23.322 15:40:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:23.322 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:23.323 
15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.323 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.324 
15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
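Here the trace switches from the controller to its namespaces: functions.sh@54 expands an extglob pattern over /sys/class/nvme/nvme2 that matches both the generic character nodes (ng2n1, ...) and the block nodes (nvme2n1, ...), and functions.sh@57 re-enters nvme_get with id-ns for each one. A sketch of just that enumeration, assuming the sysfs layout shown in the trace; the shopt line and the echo are illustrative:

  shopt -s extglob nullglob                  # nullglob is an assumption, not shown in the trace
  ctrl=/sys/class/nvme/nvme2
  # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern is @(ng2|nvme2n)*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}                       # ng2n1, ng2n2, ... as at functions.sh@56
      [[ -e $ns ]] && echo "id-ns target: /dev/$ns_dev"
  done

This is why the trace goes on to run /usr/local/src/nvme-cli/nvme id-ns against /dev/ng2n1, /dev/ng2n2 and /dev/ng2n3 in turn, filling ng2n1, ng2n2 and ng2n3 the same way nvme2 was filled.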
00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:23.324 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:23.325 15:40:11 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 
15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:23.325 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.326 15:40:11 
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:23.326 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
[functions.sh@21-23 parse each id-ns field into ng2n3[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 ']
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:23.328 15:40:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
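The @16-@23 lines above repeat one small parse loop for every namespace node: run `nvme id-ns`, split each "field : value" line on the first colon via IFS, and eval the pair into a global associative array named after the device. A minimal sketch of that pattern, assuming a functions.sh-style helper (the body below is illustrative, not the SPDK source):

  nvme_get() {                          # sketch of the traced nvme_get pattern
      local ref=$1 reg val
      shift
      local -gA "$ref=()"               # global assoc array, e.g. ng2n3=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue     # skip banner/blank lines (functions.sh@22)
          reg=${reg//[[:space:]]/}      # "lbaf  6 " -> "lbaf6"
          val=${val# }                  # drop the single leading space
          eval "${ref}[\$reg]=\$val"    # e.g. ng2n3[nsze]=0x100000
      done < <("$@")
  }
  # usage: nvme_get ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
  #        echo "${ng2n3[nsze]}"        # -> 0x100000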
[functions.sh@21-23 parse each id-ns field into nvme2n1[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0-lbaf7 identical to ng2n3 above, lbaf4='ms:0 lbads:12 rp:0 (in use)']
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:23.592 15:40:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
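The @54-@57 discovery lines rely on an extglob pattern that matches both the generic character-device nodes (ng2n*) and the block-device nodes (nvme2n*) under the controller's sysfs directory, indexing both by their trailing namespace id. A standalone sketch of that loop (variable names mirror the trace; the surrounding script is assumed):

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns
  # "ng${ctrl##*nvme}" expands to "ng2" and "${ctrl##*/}n" to "nvme2n",
  # so the glob matches ng2n1..ng2n3 and nvme2n1..nvme2n3
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue          # functions.sh@55
      ns_dev=${ns##*/}                  # e.g. ng2n1 or nvme2n1
      _ctrl_ns[${ns_dev##*n}]=$ns_dev   # keyed on nsid: ng2n1 and nvme2n1 both land in slot 1
  done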
[functions.sh@21-23 parse each id-ns field into nvme2n2[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0-lbaf7 identical to the namespaces above, lbaf4='ms:0 lbads:12 rp:0 (in use)']
00:13:23.593 15:40:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:23.593 15:40:12 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:23.593 15:40:12 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:23.593 15:40:12 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:23.593 15:40:12 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:23.594 15:40:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
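Each lbafN descriptor captured above encodes one LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and the low nibble of flbas (0x4 here) selects the active format, which is why lbaf4 carries the "(in use)" tag. A small decoding sketch, with the array literal copied from the values in this trace:

  declare -A nvme2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
  fmt=$(( ${nvme2n2[flbas]} & 0xf ))    # low nibble of flbas = active format index
  read -r ms lbads rp _ <<<"${nvme2n2[lbaf$fmt]}"
  echo "data block: $((1 << ${lbads#lbads:})) B, metadata: ${ms#ms:} B per block"
  # -> data block: 4096 B, metadata: 0 B per block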
[functions.sh@21-23 parse each id-ns field into nvme2n3[]: same register set and values as the namespaces above (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, mssrl=128, mcl=128, msrc=127, remaining fields 0, nguid=00000000000000000000000000000000, eui64=0000000000000000); lbaf0-lbaf7 identical, lbaf4='ms:0 lbads:12 rp:0 (in use)']
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:23.595 15:40:12 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:23.595 15:40:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:23.595 15:40:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:23.595 15:40:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:23.595 15:40:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.595 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:23.596 15:40:12 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:23.596 15:40:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 
15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.596 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:23.597 15:40:12 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 
15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:23.597 
15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:23.597 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:23.598 15:40:12 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:23.598 15:40:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
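The trace above is functions.sh selecting controllers that support the Simple Copy command: for each registered controller it reads back the ONCS (Optional NVM Command Support) value captured from nvme id-ctrl and tests bit 8, the Copy bit; 0x15d & 0x100 is non-zero, so every QEMU controller here qualifies. As a minimal standalone sketch of the same check (assuming a kernel-visible controller node; has_scc is a hypothetical helper name, not part of the harness):

    # Sketch only: "nvme id-ctrl" prints a line like "oncs : 0x15d";
    # ONCS bit 8 advertises the Copy command (what the harness calls "scc").
    has_scc() {
      local dev=$1 oncs
      oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
      (( oncs & 1 << 8 ))   # e.g. 0x15d & 0x100 -> non-zero, i.e. supported
    }
    has_scc /dev/nvme1 && echo "nvme1 supports the Copy command"
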
00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:23.598 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:23.599 15:40:12 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:23.599 15:40:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:23.599 15:40:12 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:23.599 15:40:12 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.734 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.734 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.734 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.993 15:40:13 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:24.993 15:40:13 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:24.993 15:40:13 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.993 15:40:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:24.993 ************************************ 00:13:24.993 START TEST nvme_simple_copy 00:13:24.993 ************************************ 00:13:24.993 15:40:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:25.252 Initializing NVMe Controllers 00:13:25.252 Attaching to 0000:00:10.0 00:13:25.252 Controller supports SCC. Attached to 0000:00:10.0 00:13:25.252 Namespace ID: 1 size: 6GB 00:13:25.252 Initialization complete. 
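For context, simple_copy is SPDK's userspace test app: it attaches over 'trtype:pcie traddr:...' (the devices were just rebound from the kernel nvme driver to uio_pci_generic by setup.sh, so no /dev node exists while it runs), fills LBAs 0-63 with random data, issues one Simple Copy to destination LBA 256, and verifies the result, as the records below report. Purely as a hedged illustration, roughly the same operation against a kernel-attached namespace with nvme-cli (>= 1.14, which added the copy subcommand; the device path is an assumption, and whether --blocks takes a 0-based value or a count should be checked against nvme-copy(1) for your version):

    # Illustrative replay, not the harness's method: copy LBAs 0-63 to LBA 256.
    nvme copy /dev/nvme1n1 --sdlba=256 --slbs=0 --blocks=63
    # Verify the two 64-block ranges match (block size 4096 per the log below):
    cmp <(dd if=/dev/nvme1n1 bs=4096 skip=0   count=64 2>/dev/null) \
        <(dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 2>/dev/null) \
      && echo "LBAs matching Written Data: 64"
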
00:13:25.252 00:13:25.252 Controller QEMU NVMe Ctrl (12340 ) 00:13:25.252 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:25.252 Namespace Block Size:4096 00:13:25.252 Writing LBAs 0 to 63 with Random Data 00:13:25.252 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:25.252 LBAs matching Written Data: 64 00:13:25.252 00:13:25.252 real 0m0.297s 00:13:25.252 user 0m0.116s 00:13:25.252 sys 0m0.079s 00:13:25.252 15:40:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.252 ************************************ 00:13:25.252 15:40:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.252 END TEST nvme_simple_copy 00:13:25.252 ************************************ 00:13:25.252 ************************************ 00:13:25.252 END TEST nvme_scc 00:13:25.252 ************************************ 00:13:25.252 00:13:25.252 real 0m8.650s 00:13:25.252 user 0m1.650s 00:13:25.252 sys 0m1.857s 00:13:25.252 15:40:13 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.252 15:40:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:25.252 15:40:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:25.252 15:40:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:25.252 15:40:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:25.252 15:40:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:25.252 15:40:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:25.252 15:40:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.252 15:40:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.252 15:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:25.252 ************************************ 00:13:25.252 START TEST nvme_fdp 00:13:25.252 ************************************ 00:13:25.252 15:40:13 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:25.511 * Looking for test storage... 00:13:25.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:25.512 15:40:13 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:25.512 15:40:13 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:13:25.512 15:40:13 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:25.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.512 --rc genhtml_branch_coverage=1 00:13:25.512 --rc genhtml_function_coverage=1 00:13:25.512 --rc genhtml_legend=1 00:13:25.512 --rc geninfo_all_blocks=1 00:13:25.512 --rc geninfo_unexecuted_blocks=1 00:13:25.512 00:13:25.512 ' 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:25.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.512 --rc genhtml_branch_coverage=1 00:13:25.512 --rc genhtml_function_coverage=1 00:13:25.512 --rc genhtml_legend=1 00:13:25.512 --rc geninfo_all_blocks=1 00:13:25.512 --rc geninfo_unexecuted_blocks=1 00:13:25.512 00:13:25.512 ' 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:25.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.512 --rc genhtml_branch_coverage=1 00:13:25.512 --rc genhtml_function_coverage=1 00:13:25.512 --rc genhtml_legend=1 00:13:25.512 --rc geninfo_all_blocks=1 00:13:25.512 --rc geninfo_unexecuted_blocks=1 00:13:25.512 00:13:25.512 ' 00:13:25.512 15:40:14 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:25.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.512 --rc genhtml_branch_coverage=1 00:13:25.512 --rc genhtml_function_coverage=1 00:13:25.512 --rc genhtml_legend=1 00:13:25.512 --rc geninfo_all_blocks=1 00:13:25.512 --rc geninfo_unexecuted_blocks=1 00:13:25.512 00:13:25.512 ' 00:13:25.512 15:40:14 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.512 15:40:14 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.512 15:40:14 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.512 15:40:14 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.512 15:40:14 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.512 15:40:14 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:25.512 15:40:14 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:25.512 15:40:14 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:25.512 15:40:14 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:25.512 15:40:14 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:26.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:26.080 Waiting for block devices as requested 00:13:26.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.340 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.340 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.340 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.626 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.626 15:40:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:31.626 15:40:20 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:31.626 15:40:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:31.626 15:40:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:31.626 15:40:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.626 15:40:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:31.626 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.626 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.627 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.627 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:31.627 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:31.628 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 
15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:31.628 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.628 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:31.629 15:40:20 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:31.629 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.629 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:31.630 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
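The field scrape traced above is the whole mechanism of nvme_get: each `field : value` line printed by nvme-cli is split on the first colon (IFS=:), the field name is stripped of padding, and the pair is eval'd into a per-device bash associative array (ng0n1[...] here, declared via `local -gA` at functions.sh@20). A minimal standalone sketch of that pattern — not the SPDK helper itself — assuming nvme-cli's usual id-ns output format and extglob (which the trace shows scripts/common.sh@15 enabling):

    #!/usr/bin/env bash
    # Sketch of the parse loop seen in the trace, not the SPDK helper itself.
    shopt -s extglob
    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
        [[ -n $reg ]] && ns_info[$reg]=${val##+([[:space:]])}
    done < <(nvme id-ns /dev/ng0n1)              # device path as in the trace above
    printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

Unlike this sketch, the traced helper evals into a dynamically named array (nvme0, ng0n1, ...), which is what lets scan_nvme_ctrls register every controller and namespace it finds under /sys/class/nvme, as the surrounding @47/@54 loop lines show.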
00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.630 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:31.631 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.631 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.631 15:40:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:31.632 15:40:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:31.632 15:40:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:31.632 15:40:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.632 15:40:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:31.632 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.632 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
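The block above is the same field-harvesting loop already seen for nvme0: nvme-cli's id-ctrl output is split on ':' into a register name and a value, and each pair is eval'ed into a global associative array named after the controller. A minimal sketch of that pattern, reconstructed from the trace rather than copied from nvme/functions.sh (the function name and the NVME_BIN fallback are assumptions; the CI run itself invokes /usr/local/src/nvme-cli/nvme):

    parse_nvme_output() {              # sketch of the nvme_get pattern traced above
        local ref=$1 opcode=$2 dev=$3 reg val
        local -gA "$ref=()"            # global assoc array, e.g. nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner/blank lines, as at @22
            reg=${reg//[[:space:]]/}           # trim padding from the field name
            val=${val# }                       # drop the single space after ':'
            eval "${ref}[\$reg]=\$val"         # e.g. nvme1[oacs]=0x12a
        done < <("${NVME_BIN:-nvme}" "$opcode" "$dev")
    }
    parse_nvme_output nvme1 id-ctrl /dev/nvme1 && echo "${nvme1[mdts]}"

The trailing spaces kept in values like sn and mn above fall out of this scheme: only the one formatting space after the colon is stripped, while the device's space-padded fixed-width fields pass through as-is.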
00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.633 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
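Most of the registers being captured here are plain counters, but several are bitmasks. For instance the nvme1[oacs]=0x12a recorded above decodes, per the NVMe base specification's Optional Admin Command Support layout, to Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5, relevant to the FDP paths this suite exercises) and Doorbell Buffer Config (bit 8). A hedged sketch of testing such a field once the array is populated (has_bit is a hypothetical helper, not part of the scripts under test):

    oacs=0x12a                          # value captured in the trace above
    has_bit() { (( (oacs >> $1) & 1 )); }
    has_bit 1 && echo 'Format NVM supported'
    has_bit 3 && echo 'Namespace Management supported'
    has_bit 5 && echo 'Directives supported'      # prerequisite for FDP placement
    has_bit 8 && echo 'Doorbell Buffer Config supported'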
00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:31.634 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.635 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:31.898 15:40:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
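The ng1n1 pass above comes from the namespace loop at functions.sh@54: for each controller the script globs both the generic character device (ng1n*) and the block namespaces (nvme1n*) out of sysfs, then files each hit into a per-controller map keyed by the digits after the final 'n', via the nameref _ctrl_ns set up at functions.sh@53. A runnable sketch of that walk under those assumptions (ns_by_id stands in for the nvme1_ns nameref target):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    declare -A ns_by_id
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                 # ng1n1 first, then nvme1n1
        ns_by_id[${ns##*n}]=$ns_dev      # same index 1: the later match wins
    done
    declare -p ns_by_id                  # e.g. ns_by_id=([1]="nvme1n1")

Both device nodes land on index 1, which is why the trace re-runs the full id-ns dump for nvme1n1 immediately after finishing ng1n1.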
00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.898 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:31.899 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
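A worked example of what these id-ns fields amount to, using the ng1n1 values captured above: nsze counts logical blocks, the low four bits of flbas select the active LBA format, and that format's lbads is log2 of the data block size (here lbaf7, listed as "(in use)" further down, with lbads:12). A hedged arithmetic sketch:

    nsze=0x17a17a   # ng1n1 namespace size in logical blocks (from the dump above)
    flbas=0x7       # bits 0-3: index of the active LBA format
    lbads=12        # lbaf7 reads "ms:64 lbads:12 rp:0 (in use)" further down
    fmt=$(( flbas & 0xf ))
    bs=$(( 1 << lbads ))
    echo "lbaf$fmt: $((nsze)) blocks x $bs B = $(( nsze * bs )) bytes"
    # lbaf7: 1548666 blocks x 4096 B = 6343335936 bytes, roughly 5.9 GiB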
00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:31.899 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.900 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:31.900 15:40:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:31.900 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.900 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
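Each lbafN entry is stored as a formatted string, so anything consuming these arrays has to re-parse it. In those strings lbads is the log2 of the LBA data size (lbads:9 means 512-byte and lbads:12 means 4096-byte blocks), ms is the per-LBA metadata size in bytes, and the low bits of flbas select the format in use, which is why flbas=0x7 above lines up with the '(in use)' tag on lbaf7. A short decoding sketch, assuming an array populated the way this trace populates nvme1n1:

  declare -A nvme1n1=(
    [flbas]=0x7                                # values copied from the trace
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
  )
  fmt=$(( ${nvme1n1[flbas]} & 0xf ))           # low nibble of FLBAS = current LBA format index
  entry=${nvme1n1[lbaf$fmt]}
  [[ $entry =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
  [[ $entry =~ ms:([0-9]+) ]] && ms=${BASH_REMATCH[1]}
  echo "format $fmt: $(( 1 << lbads ))-byte blocks, ${ms}B metadata"
  # -> format 7: 4096-byte blocks, 64B metadata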
00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:31.901 15:40:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:31.901 15:40:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:31.901 15:40:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.901 15:40:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.901 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
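ver=0x10400 captured above is the packed NVMe version word: major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0, so this QEMU controller reports NVMe 1.4.0. Unpacking it in shell:

  ver=0x10400                                  # value from the trace
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
  # -> NVMe 1.4.0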
00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:31.902 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
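wctemp and cctemp are kelvin values per the NVMe spec, so the thresholds captured above translate to 70 C (warning) and 100 C (critical):

  wctemp=343 cctemp=373                        # kelvin, as read from the trace
  echo "warning: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"
  # -> warning: 70 C, critical: 100 C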
00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.902 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:31.903 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:31.903 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
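Once a controller's id-ctrl registers are all consumed, functions.sh wires the results together: ctrls, nvmes, and bdfs each get one entry per controller (as happened for nvme1 earlier), the per-controller namespace array is attached through a nameref (the `local -n _ctrl_ns=nvme2_ns` visible just below), and each namespace parse drops its device name into that map. A sketch of how a caller can walk those tables afterwards, using illustrative values matching what the trace recorded for nvme1:

  declare -A ctrls=( [nvme1]=nvme1 )           # via ctrls["$ctrl_dev"]=nvme1
  declare -A nvmes=( [nvme1]=nvme1_ns )        # via nvmes["$ctrl_dev"]=nvme1_ns
  declare -A bdfs=(  [nvme1]=0000:00:10.0 )    # via bdfs["$ctrl_dev"]=0000:00:10.0
  declare -A nvme1_ns=( [1]=nvme1n1 )          # via _ctrl_ns[${ns##*n}]=nvme1n1

  ctrl=nvme1
  declare -n ns_map=${nvmes[$ctrl]}            # nameref, as in 'local -n _ctrl_ns=nvme2_ns'
  echo "$ctrl @ ${bdfs[$ctrl]}: ns1 is ${ns_map[1]}"
  # -> nvme1 @ 0000:00:10.0: ns1 is nvme1n1

The nameref indirection is what lets one generic loop serve every controller: the map name is data (nvme1_ns, nvme2_ns, ...), while the code that reads and writes it stays fixed.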
00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # 
00:13:31.904 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # captured id-ns registers for ng2n1:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
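The captured geometry decodes to 4 GiB per namespace: flbas=0x4 selects LBA format 4, whose lbads:12 means 4096-byte logical blocks, and nsze=0x100000 counts blocks. The arithmetic, in plain bash using only the logged numbers:

    flbas=0x4
    fmt=$(( flbas & 0xf ))    # low nibble selects the LBA format -> 4
    lbads=12                  # lbaf4 reads "ms:0 lbads:12 rp:0 (in use)"
    bs=$(( 1 << lbads ))      # 2^12 = 4096-byte logical blocks
    nsze=0x100000             # 1,048,576 blocks
    echo "format=$fmt bs=$bs bytes=$(( nsze * bs ))"
    # -> format=4 bs=4096 bytes=4294967296  (4 GiB per namespace)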
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
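The @58 step keys the per-controller namespace map by NSID, which it peels off the device name with one more parameter expansion. A sketch of that expansion:

    ns=/sys/class/nvme/nvme2/ng2n1
    echo "${ns##*n}"   # -> 1 : strip the longest prefix ending in "n", leaving the NSID
    # After this pass nvme2_ns (reached via the _ctrl_ns nameref from
    # functions.sh@53) holds [1]=ng2n1 [2]=ng2n2 [3]=ng2n3; the nvme2nX
    # block entries visited later by the same loop map onto the same indices.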
00:13:31.906 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # captured id-ns registers for ng2n2:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
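ng2n2 parses to exactly the same values as ng2n1. A quick cross-check over the two populated arrays (a sketch; assumes both associative arrays are still in scope):

    for key in "${!ng2n1[@]}"; do
        [[ ${ng2n1[$key]} == "${ng2n2[$key]}" ]] || echo "differs: $key"
    done
    echo "checked ${#ng2n1[@]} registers"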
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:32.171 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # captured id-ns registers for ng2n3:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
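Having covered the generic char devices ng2n1..ng2n3, the loop now reaches the block-device flavour, nvme2n1. Both nodes expose the same underlying namespace, so their id-ns output should match; one way to confirm, using the same pinned nvme-cli binary as the test (a sketch):

    diff <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1) \
         <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1) \
      && echo "identical id-ns output"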
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:32.172 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # captured id-ns registers for nvme2n1 (contd. below):
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
    nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 --
# [[ -n 128 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.173 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.174 
15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
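
The trace above is nvme/functions.sh's nvme_get helper walking the output of `nvme id-ns /dev/nvme2n1` line by line and caching every field (nsze, flbas, dpc, the lbafN descriptors, ...) in a global associative array named after the device. A minimal sketch of that loop, simplified from what the trace itself shows (the whitespace trimming is approximated; the nvme-cli path matches the one in the log):

    # Parse "field : value" lines from nvme-cli into a global assoc array.
    nvme_get() {
        local ref=$1 reg val    # ref is the array name, e.g. nvme2n1
        shift                   # remaining args, e.g. id-ns /dev/nvme2n1
        local -gA "$ref=()"     # declare the array at global scope
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # same guard as functions.sh@22
            reg=${reg//[[:space:]]/}           # keys like nsze, flbas, lbaf0
            eval "${ref}[\$reg]=\${val# }"     # e.g. nvme2n1[nsze]=0x100000
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After the loop returns, callers read fields as ${nvme2n1[nsze]}, ${nvme2n1[flbas]}, and so on, which is presumably how the rest of the test consumes the arrays being filled here.
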
00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:32.174 15:40:20 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:32.174 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:32.175 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.175 15:40:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.175 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:32.176 15:40:20 
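
Each lbafN entry captured above is one LBA format descriptor (ms = metadata bytes per block, lbads = log2 of the data block size, rp = relative performance), and flbas records which descriptor the namespace is currently formatted with. For these QEMU namespaces flbas=0x4 points at lbaf4, the entry tagged "(in use)". A worked decode under that reading (bits 3:0 of FLBAS select the format, which holds whenever there are at most 16 formats, as here with nlbaf=7):

    flbas=0x4
    idx=$(( flbas & 0xf ))   # -> 4, matching "lbaf4 ... (in use)" in the trace
    lbads=12                 # lbaf4 is "ms:0 lbads:12 rp:0"
    echo "in-use block size: $(( 1 << lbads )) bytes"   # -> 4096, no metadata
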
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:32.176 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.176 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:32.177 15:40:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:32.177 15:40:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:32.177 15:40:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:32.177 15:40:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:32.177 15:40:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- 
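
With nvme2n3 stored, the script registers controller nvme2 (the ctrls/nvmes/bdfs assignments at functions.sh@60-63 above), and the outer functions.sh@47 loop advances to /sys/class/nvme/nvme3, resolves its PCI address 0000:00:13.0, and starts an id-ctrl pass. A rough sketch of that outer loop as the trace implies it (the readlink-based PCI lookup is an assumption; the log only shows the resulting address, and the pci_can_use filtering is elided):

    declare -A ctrls nvmes bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill nvme3[...] fields
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns   # per-controller namespace map name
        bdfs[$ctrl_dev]=$pci
    done
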
nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:32.177 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- 
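
Among the fields just captured, ctratt=0x88010 is the one this nvme_fdp test ultimately cares about: per the NVMe 2.0 spec (TP4146), CTRATT bit 19 advertises Flexible Data Placement support, and bit 4 is Endurance Groups, which FDP builds on. A quick check in the same spirit as the test's gating, stated from the spec rather than from this log:

    ctratt=0x88010
    if (( ctratt & (1 << 19) )); then
        echo "controller advertises FDP"   # true for 0x88010
    fi
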
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 
15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.178 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:32.179 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:32.439 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:32.440 15:40:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:32.440 15:40:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:32.441 15:40:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:32.441 15:40:20 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:32.441 15:40:20 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:33.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.577 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.577 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.577 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.577 15:40:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:33.577 15:40:22 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:33.577 15:40:22 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.577 15:40:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:33.577 ************************************ 00:13:33.577 START TEST nvme_flexible_data_placement 00:13:33.577 ************************************ 00:13:33.577 15:40:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:33.835 Initializing NVMe Controllers 00:13:33.835 Attaching to 0000:00:13.0 00:13:33.835 Controller supports FDP Attached to 0000:00:13.0 00:13:33.835 Namespace ID: 1 Endurance Group ID: 1 00:13:33.835 Initialization complete. 
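The xtrace above shows nvme/functions.sh populating one associative array per controller (eval 'nvme3[reg]="val"' over IFS=:-split identify output), then ctrl_has_fdp walking the controllers and selecting the one whose CTRATT has bit 19, Flexible Data Placement, set: nvme0, nvme1 and nvme2 report 0x8000 and fail the test, nvme3 reports 0x88010 and is echoed. A minimal standalone sketch of the same selection logic, assuming nvme-cli is installed and /dev/nvme* character devices exist (this is not the harness's functions.sh itself):

    #!/usr/bin/env bash
    # Sketch: find FDP-capable controllers the way the trace above does.
    # 'nvme id-ctrl' prints "reg : val" pairs, so split on ':' like functions.sh.
    declare -A ctratt_by_dev
    for dev in /dev/nvme[0-9]; do
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # strip the padding around field names
            [[ $reg == ctratt ]] && ctratt_by_dev[$dev]=${val//[[:space:]]/}
        done < <(nvme id-ctrl "$dev")
        # CTRATT bit 19 = Flexible Data Placement (same test as functions.sh@180)
        if (( ctratt_by_dev[$dev] & 1 << 19 )); then
            echo "$dev supports FDP"
        fi
    done

Bash arithmetic accepts the 0x-prefixed hex strings directly, so 0x8000 (bit 15 only) fails the bit-19 test while 0x88010 passes, which is why only nvme3 is echoed above.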
00:13:33.835 00:13:33.835 ================================== 00:13:33.835 == FDP tests for Namespace: #01 == 00:13:33.835 ================================== 00:13:33.835 00:13:33.835 Get Feature: FDP: 00:13:33.835 ================= 00:13:33.835 Enabled: Yes 00:13:33.835 FDP configuration Index: 0 00:13:33.835 00:13:33.835 FDP configurations log page 00:13:33.835 =========================== 00:13:33.835 Number of FDP configurations: 1 00:13:33.835 Version: 0 00:13:33.835 Size: 112 00:13:33.835 FDP Configuration Descriptor: 0 00:13:33.835 Descriptor Size: 96 00:13:33.835 Reclaim Group Identifier format: 2 00:13:33.835 FDP Volatile Write Cache: Not Present 00:13:33.835 FDP Configuration: Valid 00:13:33.835 Vendor Specific Size: 0 00:13:33.835 Number of Reclaim Groups: 2 00:13:33.835 Number of Reclaim Unit Handles: 8 00:13:33.835 Max Placement Identifiers: 128 00:13:33.835 Number of Namespaces Supported: 256 00:13:33.835 Reclaim Unit Nominal Size: 6000000 bytes 00:13:33.835 Estimated Reclaim Unit Time Limit: Not Reported 00:13:33.835 RUH Desc #000: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #001: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #002: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #003: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #004: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #005: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #006: RUH Type: Initially Isolated 00:13:33.836 RUH Desc #007: RUH Type: Initially Isolated 00:13:33.836 00:13:33.836 FDP reclaim unit handle usage log page 00:13:33.836 ====================================== 00:13:33.836 Number of Reclaim Unit Handles: 8 00:13:33.836 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:33.836 RUH Usage Desc #001: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #002: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #003: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #004: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #005: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #006: RUH Attributes: Unused 00:13:33.836 RUH Usage Desc #007: RUH Attributes: Unused 00:13:33.836 00:13:33.836 FDP statistics log page 00:13:33.836 ======================= 00:13:33.836 Host bytes with metadata written: 1368170496 00:13:33.836 Media bytes with metadata written: 1369100288 00:13:33.836 Media bytes erased: 0 00:13:33.836 00:13:33.836 FDP Reclaim unit handle status 00:13:33.836 ============================== 00:13:33.836 Number of RUHS descriptors: 2 00:13:33.836 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002736 00:13:33.836 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:33.836 00:13:33.836 FDP write on placement id: 0 success 00:13:33.836 00:13:33.836 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:33.836 00:13:33.836 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:33.836 00:13:33.836 Get Feature: FDP Events for Placement handle: #0 00:13:33.836 ======================== 00:13:33.836 Number of FDP Events: 6 00:13:33.836 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:33.836 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:33.836 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:33.836 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:33.836 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:33.836 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:33.836 00:13:33.836 FDP events log
page 00:13:33.836 =================== 00:13:33.836 Number of FDP events: 1 00:13:33.836 FDP Event #0: 00:13:33.836 Event Type: RU Not Written to Capacity 00:13:33.836 Placement Identifier: Valid 00:13:33.836 NSID: Valid 00:13:33.836 Location: Valid 00:13:33.836 Placement Identifier: 0 00:13:33.836 Event Timestamp: 4 00:13:33.836 Namespace Identifier: 1 00:13:33.836 Reclaim Group Identifier: 0 00:13:33.836 Reclaim Unit Handle Identifier: 0 00:13:33.836 00:13:33.836 FDP test passed 00:13:33.836 00:13:33.836 real 0m0.266s 00:13:33.836 user 0m0.083s 00:13:33.836 sys 0m0.082s 00:13:33.836 15:40:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.836 15:40:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:33.836 ************************************ 00:13:33.836 END TEST nvme_flexible_data_placement 00:13:33.836 ************************************ 00:13:34.107 ************************************ 00:13:34.107 END TEST nvme_fdp 00:13:34.107 ************************************ 00:13:34.107 00:13:34.107 real 0m8.647s 00:13:34.107 user 0m1.653s 00:13:34.107 sys 0m1.869s 00:13:34.107 15:40:22 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.107 15:40:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:34.107 15:40:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:34.107 15:40:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:34.107 15:40:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.107 15:40:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.107 15:40:22 -- common/autotest_common.sh@10 -- # set +x 00:13:34.107 ************************************ 00:13:34.107 START TEST nvme_rpc 00:13:34.107 ************************************ 00:13:34.107 15:40:22 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:34.107 * Looking for test storage... 
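Before running anything, the nvme_rpc trace that follows probes the installed lcov via the cmp_versions helper from scripts/common.sh (the "lt 1.15 2" call): version strings are split on '.', '-' or ':' and compared numerically field by field. A condensed, self-contained sketch of that idiom, with a hypothetical function name since the real helper lives in scripts/common.sh:

    # Sketch: numeric per-field version comparison, as traced below.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov is older than 2"

Here "1.15" splits into (1 15) and "2" into (2); the first field already decides the comparison, so lcov 1.15 sorts below 2 and the LCOV_OPTS branch-coverage flags get exported, exactly as in the trace.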
00:13:34.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:34.107 15:40:22 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:34.107 15:40:22 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:34.107 15:40:22 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:34.107 15:40:22 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.107 15:40:22 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:34.366 15:40:22 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:34.366 15:40:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.366 15:40:22 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.367 15:40:22 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:34.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.367 --rc genhtml_branch_coverage=1 00:13:34.367 --rc genhtml_function_coverage=1 00:13:34.367 --rc genhtml_legend=1 00:13:34.367 --rc geninfo_all_blocks=1 00:13:34.367 --rc geninfo_unexecuted_blocks=1 00:13:34.367 00:13:34.367 ' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:34.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.367 --rc genhtml_branch_coverage=1 00:13:34.367 --rc genhtml_function_coverage=1 00:13:34.367 --rc genhtml_legend=1 00:13:34.367 --rc geninfo_all_blocks=1 00:13:34.367 --rc geninfo_unexecuted_blocks=1 00:13:34.367 00:13:34.367 ' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:34.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.367 --rc genhtml_branch_coverage=1 00:13:34.367 --rc genhtml_function_coverage=1 00:13:34.367 --rc genhtml_legend=1 00:13:34.367 --rc geninfo_all_blocks=1 00:13:34.367 --rc geninfo_unexecuted_blocks=1 00:13:34.367 00:13:34.367 ' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:34.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.367 --rc genhtml_branch_coverage=1 00:13:34.367 --rc genhtml_function_coverage=1 00:13:34.367 --rc genhtml_legend=1 00:13:34.367 --rc geninfo_all_blocks=1 00:13:34.367 --rc geninfo_unexecuted_blocks=1 00:13:34.367 00:13:34.367 ' 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:34.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78748 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:34.367 15:40:22 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78748 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 78748 ']' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.367 15:40:22 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 [2024-12-06 15:40:23.026167] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
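After the spdk_tgt startup banner above, the test attaches Nvme0 over PCIe, then calls bdev_nvme_apply_firmware with a file that does not exist and requires the -32603 "open file failed." JSON-RPC error shown below before detaching. A condensed sketch of that flow using the same rpc.py methods the trace records (error handling simplified relative to the real nvme_rpc.sh):

    # Sketch of the nvme_rpc flow traced below; rpc.py path as in this CI run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Deliberately pass a missing firmware image; rpc.py must exit non-zero
    # with the JSON-RPC error -32603 "open file failed." seen below.
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "expected bdev_nvme_apply_firmware to fail" >&2
        exit 1
    fi
    $rpc bdev_nvme_detach_controller Nvme0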
00:13:34.367 [2024-12-06 15:40:23.026643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ] 00:13:34.626 [2024-12-06 15:40:23.184391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.626 [2024-12-06 15:40:23.250843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.626 [2024-12-06 15:40:23.250880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.561 15:40:24 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.561 15:40:24 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:35.561 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:35.819 Nvme0n1 00:13:35.819 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:35.819 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:36.077 request: 00:13:36.077 { 00:13:36.077 "bdev_name": "Nvme0n1", 00:13:36.077 "filename": "non_existing_file", 00:13:36.077 "method": "bdev_nvme_apply_firmware", 00:13:36.077 "req_id": 1 00:13:36.077 } 00:13:36.077 Got JSON-RPC error response 00:13:36.077 response: 00:13:36.077 { 00:13:36.077 "code": -32603, 00:13:36.077 "message": "open file failed." 00:13:36.077 } 00:13:36.077 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:36.077 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:36.077 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:36.335 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:36.335 15:40:24 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78748 00:13:36.335 15:40:24 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 78748 ']' 00:13:36.335 15:40:24 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 78748 00:13:36.335 15:40:24 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:36.335 15:40:24 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.335 15:40:24 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78748 00:13:36.335 killing process with pid 78748 00:13:36.335 15:40:25 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.335 15:40:25 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.335 15:40:25 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78748' 00:13:36.335 15:40:25 nvme_rpc -- common/autotest_common.sh@973 -- # kill 78748 00:13:36.335 15:40:25 nvme_rpc -- common/autotest_common.sh@978 -- # wait 78748 00:13:37.266 ************************************ 00:13:37.266 END TEST nvme_rpc 00:13:37.266 ************************************ 00:13:37.266 00:13:37.266 real 0m3.068s 00:13:37.266 user 0m5.895s 00:13:37.266 sys 0m0.903s 00:13:37.266 15:40:25 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.266 15:40:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 15:40:25 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:37.266 15:40:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:37.266 15:40:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.266 15:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 ************************************ 00:13:37.266 START TEST nvme_rpc_timeouts 00:13:37.266 ************************************ 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:37.266 * Looking for test storage... 00:13:37.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.266 15:40:25 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:37.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.266 --rc genhtml_branch_coverage=1 00:13:37.266 --rc genhtml_function_coverage=1 00:13:37.266 --rc genhtml_legend=1 00:13:37.266 --rc geninfo_all_blocks=1 00:13:37.266 --rc geninfo_unexecuted_blocks=1 00:13:37.266 00:13:37.266 ' 00:13:37.266 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:37.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.266 --rc genhtml_branch_coverage=1 00:13:37.266 --rc genhtml_function_coverage=1 00:13:37.266 --rc genhtml_legend=1 00:13:37.266 --rc geninfo_all_blocks=1 00:13:37.266 --rc geninfo_unexecuted_blocks=1 00:13:37.266 00:13:37.266 ' 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:37.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.267 --rc genhtml_branch_coverage=1 00:13:37.267 --rc genhtml_function_coverage=1 00:13:37.267 --rc genhtml_legend=1 00:13:37.267 --rc geninfo_all_blocks=1 00:13:37.267 --rc geninfo_unexecuted_blocks=1 00:13:37.267 00:13:37.267 ' 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:37.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.267 --rc genhtml_branch_coverage=1 00:13:37.267 --rc genhtml_function_coverage=1 00:13:37.267 --rc genhtml_legend=1 00:13:37.267 --rc geninfo_all_blocks=1 00:13:37.267 --rc geninfo_unexecuted_blocks=1 00:13:37.267 00:13:37.267 ' 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78813 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78813 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78845 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:37.267 15:40:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78845 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 78845 ']' 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.267 15:40:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:37.524 [2024-12-06 15:40:26.049124] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:13:37.524 [2024-12-06 15:40:26.049542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78845 ] 00:13:37.524 [2024-12-06 15:40:26.205087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.782 [2024-12-06 15:40:26.265103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.782 [2024-12-06 15:40:26.265166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.715 15:40:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.715 15:40:27 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:38.715 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:38.715 Checking default timeout settings: 00:13:38.715 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:38.974 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:38.974 Making settings changes with rpc: 00:13:38.974 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:39.232 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:39.232 Check default vs. 
modified settings: 00:13:39.232 15:40:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78813 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78813 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:39.798 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:39.798 Setting action_on_timeout is changed as expected. 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78813 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78813 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.799 Setting timeout_us is changed as expected. 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78813 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78813 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:39.799 Setting timeout_admin_us is changed as expected. 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78813 /tmp/settings_modified_78813 00:13:39.799 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78845 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 78845 ']' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 78845 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78845 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.799 killing process with pid 78845 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78845' 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 78845 00:13:39.799 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 78845 00:13:40.365 RPC TIMEOUT SETTING TEST PASSED. 00:13:40.365 15:40:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:13:40.365 ************************************ 00:13:40.365 END TEST nvme_rpc_timeouts 00:13:40.365 ************************************ 00:13:40.365 00:13:40.365 real 0m3.188s 00:13:40.365 user 0m6.347s 00:13:40.365 sys 0m0.879s 00:13:40.365 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.365 15:40:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:40.365 15:40:28 -- spdk/autotest.sh@239 -- # uname -s 00:13:40.365 15:40:28 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:40.365 15:40:28 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:40.365 15:40:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.366 15:40:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.366 15:40:28 -- common/autotest_common.sh@10 -- # set +x 00:13:40.366 ************************************ 00:13:40.366 START TEST sw_hotplug 00:13:40.366 ************************************ 00:13:40.366 15:40:28 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:40.624 * Looking for test storage... 00:13:40.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:40.624 15:40:29 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:40.624 15:40:29 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:40.624 15:40:29 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:40.624 15:40:29 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.624 15:40:29 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.625 15:40:29 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:40.625 15:40:29 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.625 15:40:29 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.625 --rc genhtml_branch_coverage=1 00:13:40.625 --rc genhtml_function_coverage=1 00:13:40.625 --rc genhtml_legend=1 00:13:40.625 --rc geninfo_all_blocks=1 00:13:40.625 --rc geninfo_unexecuted_blocks=1 00:13:40.625 00:13:40.625 ' 00:13:40.625 15:40:29 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.625 --rc genhtml_branch_coverage=1 00:13:40.625 --rc genhtml_function_coverage=1 00:13:40.625 --rc genhtml_legend=1 00:13:40.625 --rc geninfo_all_blocks=1 00:13:40.625 --rc geninfo_unexecuted_blocks=1 00:13:40.625 00:13:40.625 ' 00:13:40.625 15:40:29 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.625 --rc genhtml_branch_coverage=1 00:13:40.625 --rc genhtml_function_coverage=1 00:13:40.625 --rc genhtml_legend=1 00:13:40.625 --rc geninfo_all_blocks=1 00:13:40.625 --rc geninfo_unexecuted_blocks=1 00:13:40.625 00:13:40.625 ' 00:13:40.625 15:40:29 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:40.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.625 --rc genhtml_branch_coverage=1 00:13:40.625 --rc genhtml_function_coverage=1 00:13:40.625 --rc genhtml_legend=1 00:13:40.625 --rc geninfo_all_blocks=1 00:13:40.625 --rc geninfo_unexecuted_blocks=1 00:13:40.625 00:13:40.625 ' 00:13:40.625 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:41.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.192 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.192 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.192 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.192 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
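The nvmes=($(nvme_in_userspace)) assignment just traced expands over the next stretch of log. Stripped of xtrace noise, it is an lspci filter for PCI class 01 (mass storage), subclass 08 (NVM), prog-if 02, i.e. every NVMe controller in the system. A sketch of that pipeline; the wrapper name here is made up, but the lspci | grep | awk | tr stage is verbatim from the trace:

    # Enumerate NVMe controllers (class/subclass/prog-if 0108/-p02) as BDFs.
    iter_nvme_bdfs() {
        lspci -mm -n -D \
            | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
            | tr -d '"'
    }

    nvmes=($(iter_nvme_bdfs))
    printf '%s\n' "${nvmes[@]}"   # here: 0000:00:10.0 through 0000:00:13.0

Each candidate BDF then goes through pci_can_use (honouring PCI_BLOCKED/PCI_ALLOWED) before being kept; the trace below walks exactly that, ending with four BDFs of which nvme_count=2 are sliced off for the test.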
00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:41.192 15:40:29 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:41.192 15:40:29 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:41.192 15:40:29 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:41.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.759 Waiting for block devices as requested 00:13:42.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.017 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.017 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:42.275 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.538 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:47.538 15:40:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:47.538 15:40:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:47.794 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:47.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.794 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:48.052 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:48.309 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.309 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79705 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:48.567 15:40:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:48.567 15:40:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:48.824 Initializing NVMe Controllers 00:13:48.824 Attaching to 0000:00:10.0 00:13:48.824 Attaching to 0000:00:11.0 00:13:48.824 Attached to 0000:00:10.0 00:13:48.824 Attached to 0000:00:11.0 00:13:48.824 Initialization complete. Starting I/O... 
00:13:48.824 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:48.824 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:48.824 00:13:49.813 QEMU NVMe Ctrl (12340 ): 1260 I/Os completed (+1260) 00:13:49.813 QEMU NVMe Ctrl (12341 ): 1311 I/Os completed (+1311) 00:13:49.813 00:13:50.748 QEMU NVMe Ctrl (12340 ): 2996 I/Os completed (+1736) 00:13:50.748 QEMU NVMe Ctrl (12341 ): 3092 I/Os completed (+1781) 00:13:50.748 00:13:52.125 QEMU NVMe Ctrl (12340 ): 4745 I/Os completed (+1749) 00:13:52.125 QEMU NVMe Ctrl (12341 ): 4913 I/Os completed (+1821) 00:13:52.125 00:13:53.082 QEMU NVMe Ctrl (12340 ): 6321 I/Os completed (+1576) 00:13:53.082 QEMU NVMe Ctrl (12341 ): 6567 I/Os completed (+1654) 00:13:53.082 00:13:54.018 QEMU NVMe Ctrl (12340 ): 7893 I/Os completed (+1572) 00:13:54.018 QEMU NVMe Ctrl (12341 ): 8193 I/Os completed (+1626) 00:13:54.018 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.587 [2024-12-06 15:40:43.188771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:54.587 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:54.587 [2024-12-06 15:40:43.190525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.190679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.190713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.190737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:54.587 [2024-12-06 15:40:43.192827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.192882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.192904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.192925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.587 [2024-12-06 15:40:43.216452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:54.587 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:54.587 [2024-12-06 15:40:43.218331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.218445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.218472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.218494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:54.587 [2024-12-06 15:40:43.220502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.220547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.220578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 [2024-12-06 15:40:43.220627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:54.587 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.587 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:54.587 EAL: Scan for (pci) bus failed. 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.846 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:54.846 Attaching to 0000:00:10.0 00:13:54.846 Attached to 0000:00:10.0 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.846 15:40:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:54.846 Attaching to 0000:00:11.0 00:13:54.846 Attached to 0000:00:11.0 00:13:55.782 QEMU NVMe Ctrl (12340 ): 1623 I/Os completed (+1623) 00:13:55.782 QEMU NVMe Ctrl (12341 ): 1487 I/Os completed (+1487) 00:13:55.782 00:13:56.717 QEMU NVMe Ctrl (12340 ): 3343 I/Os completed (+1720) 00:13:56.717 QEMU NVMe Ctrl (12341 ): 3269 I/Os completed (+1782) 00:13:56.717 00:13:58.089 QEMU NVMe Ctrl (12340 ): 5423 I/Os completed (+2080) 00:13:58.089 QEMU NVMe Ctrl (12341 ): 5361 I/Os completed (+2092) 00:13:58.089 00:13:59.023 QEMU NVMe Ctrl (12340 ): 7359 I/Os completed (+1936) 00:13:59.023 QEMU NVMe Ctrl (12341 ): 7348 I/Os completed (+1987) 00:13:59.023 00:13:59.959 QEMU NVMe Ctrl (12340 ): 9303 I/Os completed (+1944) 00:13:59.959 QEMU NVMe Ctrl (12341 ): 9347 I/Os completed (+1999) 00:13:59.959 00:14:00.896 QEMU NVMe Ctrl (12340 ): 11139 I/Os completed (+1836) 00:14:00.896 QEMU NVMe Ctrl (12341 ): 11279 I/Os completed (+1932) 00:14:00.896 00:14:01.829 QEMU NVMe Ctrl (12340 ): 13095 I/Os completed (+1956) 00:14:01.829 QEMU NVMe Ctrl (12341 ): 13263 I/Os completed (+1984) 
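Each of the three hotplug_events in this phase follows the shape just traced: surprise-remove both controllers, let the hotplug example notice the failed state, then rescan and rebind to uio_pci_generic so it can re-attach. The xtrace does not show the echo redirection targets, so the sysfs paths in this sketch are an assumption based on the standard Linux remove/rescan/rebind knobs, not a quote of sw_hotplug.sh; the @NN comments refer to the sw_hotplug.sh line numbers echoed above:

    # One hotplug cycle for a single controller (sysfs targets assumed).
    bdf=0000:00:10.0

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # @40: yank the device; app logs "failed state"
    echo 1 > /sys/bus/pci/rescan                   # @56: make the device reappear on the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59
    echo "$bdf" > /sys/bus/pci/drivers_probe       # @60/@61: rebind to the userspace driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # @62: clear override

After the rebind, the "Attaching to" / "Attached to" lines and the resumed I/O counters above confirm the example picked both controllers back up before the next event.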
00:14:01.829 00:14:02.765 QEMU NVMe Ctrl (12340 ): 14847 I/Os completed (+1752) 00:14:02.765 QEMU NVMe Ctrl (12341 ): 15111 I/Os completed (+1848) 00:14:02.765 00:14:04.141 QEMU NVMe Ctrl (12340 ): 16835 I/Os completed (+1988) 00:14:04.141 QEMU NVMe Ctrl (12341 ): 17129 I/Os completed (+2018) 00:14:04.141 00:14:05.077 QEMU NVMe Ctrl (12340 ): 18743 I/Os completed (+1908) 00:14:05.077 QEMU NVMe Ctrl (12341 ): 19071 I/Os completed (+1942) 00:14:05.077 00:14:06.028 QEMU NVMe Ctrl (12340 ): 20571 I/Os completed (+1828) 00:14:06.028 QEMU NVMe Ctrl (12341 ): 20938 I/Os completed (+1867) 00:14:06.028 00:14:06.986 QEMU NVMe Ctrl (12340 ): 22654 I/Os completed (+2083) 00:14:06.986 QEMU NVMe Ctrl (12341 ): 23030 I/Os completed (+2092) 00:14:06.986 00:14:06.986 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:06.986 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:06.986 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.986 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.987 [2024-12-06 15:40:55.524132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:06.987 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:06.987 [2024-12-06 15:40:55.527986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.528044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.528071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.528117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:06.987 [2024-12-06 15:40:55.530352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.530406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.530436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.530461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:06.987 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:06.987 [2024-12-06 15:40:55.565370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:06.987 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:06.987 [2024-12-06 15:40:55.567058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.567113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.567142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.567163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:06.987 [2024-12-06 15:40:55.569098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.569163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.569193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 [2024-12-06 15:40:55.569213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:06.987 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:06.987 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:07.245 Attaching to 0000:00:10.0 00:14:07.245 Attached to 0000:00:10.0 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.245 15:40:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:07.245 Attaching to 0000:00:11.0 00:14:07.245 Attached to 0000:00:11.0 00:14:07.812 QEMU NVMe Ctrl (12340 ): 1276 I/Os completed (+1276) 00:14:07.812 QEMU NVMe Ctrl (12341 ): 1047 I/Os completed (+1047) 00:14:07.812 00:14:08.746 QEMU NVMe Ctrl (12340 ): 3328 I/Os completed (+2052) 00:14:08.747 QEMU NVMe Ctrl (12341 ): 3129 I/Os completed (+2082) 00:14:08.747 00:14:10.117 QEMU NVMe Ctrl (12340 ): 5356 I/Os completed (+2028) 00:14:10.117 QEMU NVMe Ctrl (12341 ): 5182 I/Os completed (+2053) 00:14:10.117 00:14:11.052 QEMU NVMe Ctrl (12340 ): 7396 I/Os completed (+2040) 00:14:11.052 QEMU NVMe Ctrl (12341 ): 7263 I/Os completed (+2081) 00:14:11.052 00:14:11.986 QEMU NVMe Ctrl (12340 ): 9428 I/Os completed (+2032) 00:14:11.986 QEMU NVMe Ctrl (12341 ): 9320 I/Os completed (+2057) 00:14:11.986 00:14:12.929 QEMU NVMe Ctrl (12340 ): 11264 I/Os completed (+1836) 00:14:12.929 QEMU NVMe Ctrl (12341 ): 11176 I/Os completed (+1856) 00:14:12.929 00:14:13.866 QEMU NVMe Ctrl (12340 ): 13312 I/Os completed (+2048) 00:14:13.866 QEMU NVMe Ctrl (12341 ): 13252 I/Os completed (+2076) 00:14:13.866 00:14:14.802 QEMU NVMe Ctrl (12340 ): 15393 I/Os completed (+2081) 00:14:14.802 QEMU NVMe Ctrl (12341 ): 15361 I/Os completed (+2109) 00:14:14.802 00:14:15.738 
QEMU NVMe Ctrl (12340 ): 17449 I/Os completed (+2056) 00:14:15.738 QEMU NVMe Ctrl (12341 ): 17466 I/Os completed (+2105) 00:14:15.738 00:14:17.114 QEMU NVMe Ctrl (12340 ): 19481 I/Os completed (+2032) 00:14:17.114 QEMU NVMe Ctrl (12341 ): 19532 I/Os completed (+2066) 00:14:17.114 00:14:18.047 QEMU NVMe Ctrl (12340 ): 21513 I/Os completed (+2032) 00:14:18.047 QEMU NVMe Ctrl (12341 ): 21576 I/Os completed (+2044) 00:14:18.047 00:14:18.984 QEMU NVMe Ctrl (12340 ): 23513 I/Os completed (+2000) 00:14:18.984 QEMU NVMe Ctrl (12341 ): 23618 I/Os completed (+2042) 00:14:18.984 00:14:19.242 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:19.242 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:19.242 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:19.242 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:19.242 [2024-12-06 15:41:07.915161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:19.242 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:19.242 [2024-12-06 15:41:07.917190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.917383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.917561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.917699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:19.242 [2024-12-06 15:41:07.920256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.920486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.920645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.242 [2024-12-06 15:41:07.920789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:19.501 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:19.501 [2024-12-06 15:41:07.945096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:19.501 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:19.501 [2024-12-06 15:41:07.946882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.947046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.947197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.947262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:19.501 [2024-12-06 15:41:07.949327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.949477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.949595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 [2024-12-06 15:41:07.949652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:19.501 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:19.501 15:41:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:19.501 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:19.760 Attaching to 0000:00:10.0 00:14:19.760 Attached to 0000:00:10.0 00:14:19.760 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:19.760 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:19.760 15:41:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:19.760 Attaching to 0000:00:11.0 00:14:19.760 Attached to 0000:00:11.0 00:14:19.760 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:19.760 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:19.760 [2024-12-06 15:41:08.298473] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:32.074 15:41:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:32.074 15:41:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:32.074 15:41:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.11 00:14:32.074 15:41:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.11 00:14:32.074 15:41:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:32.074 15:41:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.11 00:14:32.074 15:41:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.11 2 00:14:32.074 remove_attach_helper took 43.11s to complete (handling 2 nvme drive(s)) 15:41:20 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79705 00:14:38.691 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79705) - No such process 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79705 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80254 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:38.691 15:41:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80254 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80254 ']' 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.691 15:41:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.691 [2024-12-06 15:41:26.427047] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:14:38.691 [2024-12-06 15:41:26.430776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80254 ] 00:14:38.691 [2024-12-06 15:41:26.600036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.691 [2024-12-06 15:41:26.646374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.691 15:41:27 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.691 15:41:27 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:38.692 15:41:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:38.692 15:41:27 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:38.692 15:41:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:45.259 15:41:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.259 15:41:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:45.259 [2024-12-06 15:41:33.449871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:45.259 [2024-12-06 15:41:33.452433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.259 [2024-12-06 15:41:33.452722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.259 [2024-12-06 15:41:33.452761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.259 [2024-12-06 15:41:33.452787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.259 [2024-12-06 15:41:33.452806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.259 [2024-12-06 15:41:33.452821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.259 [2024-12-06 15:41:33.452842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.259 [2024-12-06 15:41:33.452856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.259 [2024-12-06 15:41:33.452873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.259 [2024-12-06 15:41:33.452886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.259 [2024-12-06 15:41:33.452903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.259 [2024-12-06 15:41:33.452917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.259 15:41:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.259 15:41:33 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:45.259 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:45.518 [2024-12-06 15:41:33.949877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:45.518 [2024-12-06 15:41:33.952475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.518 [2024-12-06 15:41:33.952754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.518 [2024-12-06 15:41:33.952786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.518 [2024-12-06 15:41:33.952810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.518 [2024-12-06 15:41:33.952826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.518 [2024-12-06 15:41:33.952844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.518 [2024-12-06 15:41:33.952859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.518 [2024-12-06 15:41:33.952883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.518 [2024-12-06 15:41:33.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.518 [2024-12-06 15:41:33.952925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.518 [2024-12-06 15:41:33.952939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.518 [2024-12-06 15:41:33.952986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:45.518 15:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:45.518 15:41:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.518 15:41:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:45.518 15:41:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.518 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:45.518 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:45.518 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:45.518 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:45.518 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:45.777 
15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:45.777 15:41:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:57.976 15:41:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.976 15:41:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.976 15:41:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:57.976 [2024-12-06 15:41:46.450052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:57.976 [2024-12-06 15:41:46.452701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.976 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.976 [2024-12-06 15:41:46.452867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.977 [2024-12-06 15:41:46.452918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.977 [2024-12-06 15:41:46.452969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.977 [2024-12-06 15:41:46.452993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.977 [2024-12-06 15:41:46.453009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.977 [2024-12-06 15:41:46.453028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.977 [2024-12-06 15:41:46.453043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.977 [2024-12-06 15:41:46.453068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.977 [2024-12-06 15:41:46.453084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:57.977 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:57.977 [2024-12-06 15:41:46.453102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.977 [2024-12-06 15:41:46.453127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.977 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.977 15:41:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.977 15:41:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.977 15:41:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.977 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:57.977 15:41:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:58.235 [2024-12-06 15:41:46.850079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:58.235 [2024-12-06 15:41:46.852993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.235 [2024-12-06 15:41:46.853270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.235 [2024-12-06 15:41:46.853501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.235 [2024-12-06 15:41:46.853779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.235 [2024-12-06 15:41:46.853835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.235 [2024-12-06 15:41:46.854050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.235 [2024-12-06 15:41:46.854204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.235 [2024-12-06 15:41:46.854354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.235 [2024-12-06 15:41:46.854506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.235 [2024-12-06 15:41:46.854773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.235 [2024-12-06 15:41:46.855000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.235 [2024-12-06 15:41:46.855163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:58.493 15:41:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.493 15:41:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.493 15:41:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.493 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
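Unlike the earlier phase (use_bdev=false, which trusted the example's own attach messages), this spdk_tgt phase verifies removal through the target itself. The @12/@13 and @50/@51 lines above decode to a small poll loop around bdev_bdfs, where rpc_cmd is the harness wrapper for scripts/rpc.py:

    # Ask the target which NVMe bdevs it still exposes, as a sorted BDF list,
    # then poll until the surprise-removed controllers drop out of it.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

In the trace this is exactly the "(( 1 > 0 )) ... sleep 0.5 ... Still waiting for 0000:00:11.0 to be gone" sequence, which exits once the count reaches "(( 0 > 0 ))"; the rebind echoes around it then bring both controllers back for the next event.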
00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.752 15:41:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.955 15:41:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.955 15:41:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.955 15:41:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.955 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.955 [2024-12-06 15:41:59.450266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:10.955 [2024-12-06 15:41:59.453018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.955 [2024-12-06 15:41:59.453220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.955 [2024-12-06 15:41:59.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.955 [2024-12-06 15:41:59.453338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.955 [2024-12-06 15:41:59.453366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.955 [2024-12-06 15:41:59.453381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.955 [2024-12-06 15:41:59.453399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.955 [2024-12-06 15:41:59.453412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.956 [2024-12-06 15:41:59.453430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.956 [2024-12-06 15:41:59.453445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.956 [2024-12-06 15:41:59.453461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.956 [2024-12-06 15:41:59.453474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.956 15:41:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.956 15:41:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.956 15:41:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:10.956 15:41:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:11.214 [2024-12-06 15:41:59.850259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:11.214 [2024-12-06 15:41:59.852833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.214 [2024-12-06 15:41:59.853059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.214 [2024-12-06 15:41:59.853091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.214 [2024-12-06 15:41:59.853116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.214 [2024-12-06 15:41:59.853132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.214 [2024-12-06 15:41:59.853152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.214 [2024-12-06 15:41:59.853183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.214 [2024-12-06 15:41:59.853203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.214 [2024-12-06 15:41:59.853229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.214 [2024-12-06 15:41:59.853246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:11.214 [2024-12-06 15:41:59.853261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.214 [2024-12-06 15:41:59.853279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
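The bdev_bdfs calls traced at sw_hotplug.sh@12-13 just above reduce to a single RPC piped through jq; the /dev/fd/63 argument in the trace is the footprint of a bash process substitution. A minimal sketch of the helper, reconstructed from the xtrace (the script body itself is not reproduced in this log):

    # List the PCI addresses (BDFs) of every NVMe controller SPDK currently
    # exposes as a bdev. <(rpc_cmd ...) shows up in the xtrace as /dev/fd/63.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The test compares this list against the expected device set (sw_hotplug.sh@71) after every hotplug round.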
00:15:11.473 15:42:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.473 15:42:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:11.473 15:42:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:11.473 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:11.731 15:42:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.05 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.05 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:15:23.967 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:23.967 15:42:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:23.967 15:42:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.530 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.530 15:42:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.530 15:42:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.530 15:42:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.531 [2024-12-06 15:42:18.537974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
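At this point the test has re-armed SPDK's hotplug monitor (rpc_cmd bdev_nvme_set_hotplug -d, then -e, traced at sw_hotplug.sh@119-120 just above) and launches debug_remove_attach_helper 3 6 true, i.e. three hotplug rounds with a 6 s wait, timed through timing_cmd. Neither function body is reproduced in this log, so the sketch below is a reconstruction from the xtrace; the fd plumbing and the sysfs redirect target (xtrace never records redirections) are assumptions.

    # timing_cmd (autotest_common.sh@709-722 in the trace): run a command and
    # emit only its elapsed wall time; TIMEFORMAT=%2R prints seconds with two
    # decimals, which is where figures like "45.05" above come from.
    timing_cmd() (
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        [[ -t 0 ]] || exec 0</dev/null     # @711: detach stdin when not a tty
        exec 3>&1                          # keep a copy of stdout for "$@"
        time=$({ time "$@" >&3 2>&3; } 2>&1) || cmd_es=$?
        echo "$time"                       # only the measurement is captured
        return "$cmd_es"
    )

    # remove_attach_helper (sw_hotplug.sh@27-40): surprise-remove every NVMe
    # device, then wait, rescan, rebind and verify -- hotplug_events times.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev bdfs
        sleep "$hotplug_wait"              # @36: let the target settle first
        while ((hotplug_events--)); do
            for dev in "${nvmes[@]}"; do
                # xtrace shows only `echo 1`; the sysfs target is an assumption.
                echo 1 > "/sys/bus/pci/devices/$dev/remove"
            done
            # ... removal poll, rescan, rebind and verification follow
            # (sketched separately below) ...
        done
    }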
00:15:30.531 [2024-12-06 15:42:18.539623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.539661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.539685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.539708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.539725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.539739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.539756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.539769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.539787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.539801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.539816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.539845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:30.531 15:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:30.531 [2024-12-06 15:42:18.937976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
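Each "Still waiting for %s to be gone" line in this stretch comes from the removal poll traced at sw_hotplug.sh@50-51: list what SPDK still sees, print it, sleep half a second, retry. A minimal sketch, with variable names assumed:

    # Poll until the surprise-removed controllers vanish from bdev_get_bdevs.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do          # trace: (( 2 > 0 )), (( 1 > 0 )), ...
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done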
00:15:30.531 [2024-12-06 15:42:18.939625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.939671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.939691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.939713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.939728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.939744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.939759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.939775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.939788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 [2024-12-06 15:42:18.939804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.531 [2024-12-06 15:42:18.939817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.531 [2024-12-06 15:42:18.939852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.531 15:42:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.531 15:42:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.531 15:42:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:30.531 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:30.790 15:42:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.991 15:42:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.991 15:42:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.991 15:42:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:42.991 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.992 15:42:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.992 15:42:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.992 [2024-12-06 15:42:31.538154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
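The echo run traced at sw_hotplug.sh@56-62 just above brings the devices back: a bus rescan followed, per device, by a driver_override dance that pins the controller to uio_pci_generic. xtrace drops redirection targets, so the sysfs paths below are assumptions based on the standard Linux PCI rebind sequence; the second echo of the BDF (@61) presumably hits a bind or probe file whose path the log does not reveal.

    echo 1 > /sys/bus/pci/rescan           # @56: re-enumerate the PCI bus
    for dev in "${nvmes[@]}"; do           # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe   # @60: ask the kernel to probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62
    done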
00:15:42.992 [2024-12-06 15:42:31.539935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.992 [2024-12-06 15:42:31.540005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.992 [2024-12-06 15:42:31.540033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.992 [2024-12-06 15:42:31.540056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.992 [2024-12-06 15:42:31.540075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.992 [2024-12-06 15:42:31.540090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.992 [2024-12-06 15:42:31.540106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.992 [2024-12-06 15:42:31.540119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.992 [2024-12-06 15:42:31.540135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.992 [2024-12-06 15:42:31.540174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.992 [2024-12-06 15:42:31.540192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.992 [2024-12-06 15:42:31.540206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.992 15:42:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:42.992 15:42:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:43.251 [2024-12-06 15:42:31.938148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
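After rebinding, the helper sleeps out a doubled hotplug window and asserts that exactly the original controllers re-attached (sw_hotplug.sh@66-71 in the trace above; `sleep 12` against hotplug_wait=6 suggests, but does not prove, the doubling). A sketch:

    sleep $((hotplug_wait * 2))            # @66: observed as `sleep 12`
    bdfs=($(bdev_bdfs))                    # @70: re-list what SPDK sees now
    # @71: the joined BDF list must match the original device set exactly.
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]      # "0000:00:10.0 0000:00:11.0" here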
00:15:43.251 [2024-12-06 15:42:31.939732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.251 [2024-12-06 15:42:31.939794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.251 [2024-12-06 15:42:31.939814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.251 [2024-12-06 15:42:31.939835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.251 [2024-12-06 15:42:31.939850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.251 [2024-12-06 15:42:31.939866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.251 [2024-12-06 15:42:31.939881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.251 [2024-12-06 15:42:31.939897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.251 [2024-12-06 15:42:31.939911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.251 [2024-12-06 15:42:31.939930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:43.251 [2024-12-06 15:42:31.939943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.251 [2024-12-06 15:42:31.939973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:43.511 15:42:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.511 15:42:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:43.511 15:42:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:43.511 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:43.770 15:42:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.971 [2024-12-06 15:42:44.538271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:55.971 [2024-12-06 15:42:44.540238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.971 [2024-12-06 15:42:44.540286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.971 [2024-12-06 15:42:44.540313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.971 [2024-12-06 15:42:44.540337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.971 [2024-12-06 15:42:44.540360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.971 [2024-12-06 15:42:44.540376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.971 [2024-12-06 15:42:44.540394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.971 [2024-12-06 15:42:44.540410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.971 [2024-12-06 15:42:44.540442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.971 [2024-12-06 15:42:44.540471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.971 [2024-12-06 15:42:44.540503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.971 [2024-12-06 15:42:44.540517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:55.971 15:42:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:55.971 15:42:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:56.535 [2024-12-06 15:42:44.938248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:56.535 [2024-12-06 15:42:44.939625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.535 [2024-12-06 15:42:44.939665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.535 [2024-12-06 15:42:44.939683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.535 [2024-12-06 15:42:44.939701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.535 [2024-12-06 15:42:44.939715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.535 [2024-12-06 15:42:44.939729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.535 [2024-12-06 15:42:44.939742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.535 [2024-12-06 15:42:44.939759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.535 [2024-12-06 15:42:44.939771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.535 [2024-12-06 15:42:44.939786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:56.535 [2024-12-06 15:42:44.939798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.535 [2024-12-06 15:42:44.939812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.535 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:56.535 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:56.536 15:42:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.536 15:42:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:56.536 15:42:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:56.536 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:56.793 15:42:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.08 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.08 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.08 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.08 2 00:16:09.069 remove_attach_helper took 45.08s to complete (handling 2 nvme drive(s)) 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:09.069 15:42:57 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80254 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80254 ']' 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80254 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:09.069 15:42:57 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80254 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.070 killing process with pid 80254 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80254' 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80254 00:16:09.070 15:42:57 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80254 00:16:09.637 15:42:58 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:09.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:10.463 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:10.463 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:10.463 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.463 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.463 00:16:10.463 real 2m30.043s 00:16:10.463 user 1m50.173s 00:16:10.463 sys 0m19.619s 00:16:10.463 15:42:59 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.463 ************************************ 00:16:10.463 15:42:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 END TEST sw_hotplug 00:16:10.463 ************************************ 00:16:10.463 15:42:59 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:10.463 15:42:59 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:10.463 15:42:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.463 15:42:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.463 15:42:59 -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 ************************************ 00:16:10.463 START TEST nvme_xnvme 00:16:10.463 ************************************ 00:16:10.463 15:42:59 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:10.724 * Looking for test storage... 
00:16:10.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.724 15:42:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.724 --rc genhtml_branch_coverage=1 00:16:10.724 --rc genhtml_function_coverage=1 00:16:10.724 --rc genhtml_legend=1 00:16:10.724 --rc geninfo_all_blocks=1 00:16:10.724 --rc geninfo_unexecuted_blocks=1 00:16:10.724 00:16:10.724 ' 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.724 --rc genhtml_branch_coverage=1 00:16:10.724 --rc genhtml_function_coverage=1 00:16:10.724 --rc genhtml_legend=1 00:16:10.724 --rc geninfo_all_blocks=1 00:16:10.724 --rc geninfo_unexecuted_blocks=1 00:16:10.724 00:16:10.724 ' 00:16:10.724 15:42:59 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.724 --rc genhtml_branch_coverage=1 00:16:10.724 --rc genhtml_function_coverage=1 00:16:10.724 --rc genhtml_legend=1 00:16:10.724 --rc geninfo_all_blocks=1 00:16:10.724 --rc geninfo_unexecuted_blocks=1 00:16:10.724 00:16:10.724 ' 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.724 --rc genhtml_branch_coverage=1 00:16:10.724 --rc genhtml_function_coverage=1 00:16:10.724 --rc genhtml_legend=1 00:16:10.724 --rc geninfo_all_blocks=1 00:16:10.724 --rc geninfo_unexecuted_blocks=1 00:16:10.724 00:16:10.724 ' 00:16:10.724 15:42:59 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:10.724 15:42:59 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:10.724 15:42:59 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:10.725 15:42:59 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:10.725 15:42:59 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:10.725 15:42:59 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:10.725 15:42:59 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:10.725 15:42:59 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:10.725 15:42:59 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:10.725 #define SPDK_CONFIG_H 00:16:10.725 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:10.725 #define SPDK_CONFIG_APPS 1 00:16:10.725 #define SPDK_CONFIG_ARCH native 00:16:10.725 #define SPDK_CONFIG_ASAN 1 00:16:10.725 #undef SPDK_CONFIG_AVAHI 00:16:10.725 #undef SPDK_CONFIG_CET 00:16:10.725 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:10.725 #define SPDK_CONFIG_COVERAGE 1 00:16:10.725 #define SPDK_CONFIG_CROSS_PREFIX 00:16:10.725 #undef SPDK_CONFIG_CRYPTO 00:16:10.725 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:10.725 #undef SPDK_CONFIG_CUSTOMOCF 00:16:10.725 #undef SPDK_CONFIG_DAOS 00:16:10.725 #define SPDK_CONFIG_DAOS_DIR 00:16:10.725 #define SPDK_CONFIG_DEBUG 1 00:16:10.725 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:10.725 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:16:10.725 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:16:10.725 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:16:10.725 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:10.725 #undef SPDK_CONFIG_DPDK_UADK 00:16:10.725 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:10.725 #define SPDK_CONFIG_EXAMPLES 1 00:16:10.725 #undef SPDK_CONFIG_FC 00:16:10.725 #define SPDK_CONFIG_FC_PATH 00:16:10.725 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:10.725 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:10.725 #define SPDK_CONFIG_FSDEV 1 00:16:10.725 #undef SPDK_CONFIG_FUSE 00:16:10.725 #undef SPDK_CONFIG_FUZZER 00:16:10.725 #define SPDK_CONFIG_FUZZER_LIB 00:16:10.725 #undef SPDK_CONFIG_GOLANG 00:16:10.725 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:10.725 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:10.725 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:10.725 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:10.726 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:10.726 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:10.726 #undef SPDK_CONFIG_HAVE_LZ4 00:16:10.726 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:10.726 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:10.726 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:10.726 #define SPDK_CONFIG_IDXD 1 00:16:10.726 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:10.726 #undef SPDK_CONFIG_IPSEC_MB 00:16:10.726 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:10.726 #define SPDK_CONFIG_ISAL 1 00:16:10.726 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:10.726 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:10.726 #define SPDK_CONFIG_LIBDIR 00:16:10.726 #undef SPDK_CONFIG_LTO 00:16:10.726 #define SPDK_CONFIG_MAX_LCORES 128 00:16:10.726 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:10.726 #define SPDK_CONFIG_NVME_CUSE 1 00:16:10.726 #undef 
SPDK_CONFIG_OCF 00:16:10.726 #define SPDK_CONFIG_OCF_PATH 00:16:10.726 #define SPDK_CONFIG_OPENSSL_PATH 00:16:10.726 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:10.726 #define SPDK_CONFIG_PGO_DIR 00:16:10.726 #undef SPDK_CONFIG_PGO_USE 00:16:10.726 #define SPDK_CONFIG_PREFIX /usr/local 00:16:10.726 #undef SPDK_CONFIG_RAID5F 00:16:10.726 #undef SPDK_CONFIG_RBD 00:16:10.726 #define SPDK_CONFIG_RDMA 1 00:16:10.726 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:10.726 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:10.726 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:10.726 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:10.726 #define SPDK_CONFIG_SHARED 1 00:16:10.726 #undef SPDK_CONFIG_SMA 00:16:10.726 #define SPDK_CONFIG_TESTS 1 00:16:10.726 #undef SPDK_CONFIG_TSAN 00:16:10.726 #define SPDK_CONFIG_UBLK 1 00:16:10.726 #define SPDK_CONFIG_UBSAN 1 00:16:10.726 #undef SPDK_CONFIG_UNIT_TESTS 00:16:10.726 #undef SPDK_CONFIG_URING 00:16:10.726 #define SPDK_CONFIG_URING_PATH 00:16:10.726 #undef SPDK_CONFIG_URING_ZNS 00:16:10.726 #undef SPDK_CONFIG_USDT 00:16:10.726 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:10.726 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:10.726 #undef SPDK_CONFIG_VFIO_USER 00:16:10.726 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:10.726 #define SPDK_CONFIG_VHOST 1 00:16:10.726 #define SPDK_CONFIG_VIRTIO 1 00:16:10.726 #undef SPDK_CONFIG_VTUNE 00:16:10.726 #define SPDK_CONFIG_VTUNE_DIR 00:16:10.726 #define SPDK_CONFIG_WERROR 1 00:16:10.726 #define SPDK_CONFIG_WPDK_DIR 00:16:10.726 #define SPDK_CONFIG_XNVME 1 00:16:10.726 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:10.726 15:42:59 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.726 15:42:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.726 15:42:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.726 15:42:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.726 15:42:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.726 15:42:59 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.726 15:42:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.726 15:42:59 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.726 15:42:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:10.726 15:42:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:10.726 15:42:59 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:10.726 15:42:59 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:10.726 15:42:59 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:10.727 15:42:59 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:10.727 15:42:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:10.728 15:42:59 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81577 ]] 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81577 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.agApJb 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.agApJb/tests/xnvme /tmp/spdk.agApJb 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13368840192 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:10.728 15:42:59 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6213693440 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6262992896 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13368840192 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6213693440 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:10.728 15:42:59 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97139757056 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2563022848 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:10.728 * Looking for test storage... 
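The storage probe above reads 'df -T' into parallel mount/filesystem/size arrays, then walks the storage_candidates list until one sits on a filesystem with the requested headroom. A standalone sketch of that pattern follows; the candidate directories here are hypothetical, not the harness's actual storage_candidates:

# Sketch: pick the first candidate directory whose filesystem has enough
# free space, mirroring the mounts/fss/avails bookkeeping above.
requested_size=$((2 * 1024 * 1024 * 1024))  # ~2 GiB, as requested above
declare -A avails
while read -r _ _ _ _ avail _ mount; do
  avails["$mount"]=$((avail * 1024))        # df reports 1K blocks
done < <(df -T | grep -v Filesystem)
for dir in /home/vagrant/tests /tmp; do     # hypothetical candidates
  mount_point=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
  if (( avails[$mount_point] >= requested_size )); then
    printf '* Found test storage at %s\n' "$dir"
    break
  fi
done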
00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.728 15:42:59 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13368840192 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:10.988 15:42:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:10.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.989 --rc genhtml_branch_coverage=1 00:16:10.989 --rc genhtml_function_coverage=1 00:16:10.989 --rc genhtml_legend=1 00:16:10.989 --rc geninfo_all_blocks=1 00:16:10.989 --rc geninfo_unexecuted_blocks=1 00:16:10.989 00:16:10.989 ' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:10.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.989 --rc genhtml_branch_coverage=1 00:16:10.989 --rc genhtml_function_coverage=1 00:16:10.989 --rc genhtml_legend=1 00:16:10.989 --rc geninfo_all_blocks=1 
00:16:10.989 --rc geninfo_unexecuted_blocks=1 00:16:10.989 00:16:10.989 ' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:10.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.989 --rc genhtml_branch_coverage=1 00:16:10.989 --rc genhtml_function_coverage=1 00:16:10.989 --rc genhtml_legend=1 00:16:10.989 --rc geninfo_all_blocks=1 00:16:10.989 --rc geninfo_unexecuted_blocks=1 00:16:10.989 00:16:10.989 ' 00:16:10.989 15:42:59 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:10.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.989 --rc genhtml_branch_coverage=1 00:16:10.989 --rc genhtml_function_coverage=1 00:16:10.989 --rc genhtml_legend=1 00:16:10.989 --rc geninfo_all_blocks=1 00:16:10.989 --rc geninfo_unexecuted_blocks=1 00:16:10.989 00:16:10.989 ' 00:16:10.989 15:42:59 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.989 15:42:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.989 15:42:59 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.989 15:42:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.989 15:42:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.989 15:42:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:10.989 15:42:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.989 15:42:59 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:16:10.989 15:42:59 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:16:11.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:11.507 Waiting for block devices as requested
00:16:11.792 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:16:11.792 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:16:11.792 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:16:11.792 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:16:17.052 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:16:17.052 15:43:05 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:16:17.310 15:43:05 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:16:17.310 15:43:05 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:16:17.568 15:43:06 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:16:17.568 No valid GPT data, bailing
00:16:17.568 15:43:06 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:16:17.568 15:43:06 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
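At this point the loop above has pinned the first combination (io_mechanism=libaio, conserve_cpu=false) into method_bdev_xnvme_create_0. The full matrix those arrays encode can be sketched as:

# Sketch: the io-mechanism/device/conserve_cpu matrix set up by
# xnvme/common.sh and iterated by xnvme/xnvme.sh above.
declare -A xnvme_filename=(
  [libaio]=/dev/nvme0n1
  [io_uring]=/dev/nvme0n1
  [io_uring_cmd]=/dev/ng0n1
)
for io in libaio io_uring io_uring_cmd; do
  for cc in false true; do
    # The harness runs xnvme_rpc, xnvme_bdevperf and xnvme_fio_plugin
    # once per combination printed here.
    echo "io_mechanism=$io filename=${xnvme_filename[$io]} conserve_cpu=$cc"
  done
done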
00:16:17.568 15:43:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:17.568 15:43:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:17.568 15:43:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:17.568 15:43:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:17.568 ************************************
00:16:17.568 START TEST xnvme_rpc
00:16:17.568 ************************************
00:16:17.568 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:17.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81964
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81964
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81964 ']'
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:17.569 15:43:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.827 [2024-12-06 15:43:06.292031] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:16:17.827 [2024-12-06 15:43:06.292453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81964 ]
00:16:17.827 [2024-12-06 15:43:06.445918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:17.827 [2024-12-06 15:43:06.488761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
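spdk_tgt was launched in the background and waitforlisten (max_retries=100) polls the RPC socket until the target answers. Reduced to its core, that start-and-wait idiom looks like the sketch below; the rpc_get_methods probe and the 0.5s back-off are assumptions for illustration, not the helper's exact implementation.

# Sketch: start the target, then poll its RPC socket until it is ready.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!
for ((i = 0; i < 100; i++)); do
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods &> /dev/null; then
    break                                  # socket is up, target is ready
  fi
  sleep 0.5                                # assumed back-off interval
done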
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.762 xnvme_bdev
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.762 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
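Every rpc_xnvme check above is the same two-step: dump the saved bdev config over JSON-RPC, then filter one parameter of the bdev_xnvme_create call out with jq. Done by hand against the same socket, the cycle looks roughly like this sketch (argument order mirrors the rpc_cmd call above):

# Sketch: create the xnvme bdev and read back one of its parameters.
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# expected output: libaio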
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81964
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81964 ']'
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81964
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81964
00:16:19.021 killing process with pid 81964
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81964'
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81964
00:16:19.021 15:43:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81964
00:16:19.615
00:16:19.615 real 0m1.874s
00:16:19.615 user 0m2.039s
00:16:19.615 sys 0m0.540s
00:16:19.615 15:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:19.615 ************************************
00:16:19.615 END TEST xnvme_rpc
00:16:19.615 ************************************
00:16:19.615 15:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:19.615 15:43:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:19.615 15:43:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:19.615 15:43:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:19.615 15:43:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:19.615 ************************************
00:16:19.615 START TEST xnvme_bdevperf
00:16:19.615 ************************************
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:19.615 15:43:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:19.615 {
00:16:19.615 "subsystems": [
00:16:19.615 {
00:16:19.615 "subsystem": "bdev",
00:16:19.615 "config": [
00:16:19.615 {
00:16:19.615 "params": {
00:16:19.615 "io_mechanism": "libaio",
00:16:19.615 "conserve_cpu": false,
00:16:19.615 "filename": "/dev/nvme0n1",
00:16:19.615 "name": "xnvme_bdev"
00:16:19.615 },
00:16:19.615 "method": "bdev_xnvme_create"
00:16:19.615 },
00:16:19.615 {
00:16:19.615 "method": "bdev_wait_for_examine"
00:16:19.615 }
00:16:19.615 ]
00:16:19.615 }
00:16:19.615 ]
00:16:19.615 }
00:16:19.615 [2024-12-06 15:43:08.196450] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:16:19.874 [2024-12-06 15:43:08.196616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82027 ]
00:16:19.874 [2024-12-06 15:43:08.352197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:19.874 [2024-12-06 15:43:08.388917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:19.874 Running I/O for 5 seconds...
00:16:22.187 23967.00 IOPS, 93.62 MiB/s
[2024-12-06T15:43:11.816Z] 23360.00 IOPS, 91.25 MiB/s
[2024-12-06T15:43:12.752Z] 23490.00 IOPS, 91.76 MiB/s
[2024-12-06T15:43:13.688Z] 23962.75 IOPS, 93.60 MiB/s
00:16:24.995 Latency(us)
00:16:24.995 [2024-12-06T15:43:13.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:24.995 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:24.995 xnvme_bdev : 5.00 23918.25 93.43 0.00 0.00 2669.68 435.67 9711.24
00:16:24.995 [2024-12-06T15:43:13.688Z] ===================================================================================================================
00:16:24.995 [2024-12-06T15:43:13.688Z] Total : 23918.25 93.43 0.00 0.00 2669.68 435.67 9711.24
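bdevperf took its bdev layout over /dev/fd/62, a process-substitution fd fed by gen_conf. An equivalent standalone run with the same JSON in a regular file would look like this sketch (the /tmp path is hypothetical):

# Sketch: the randread run above, reproduced with a config file on disk.
cat > /tmp/xnvme_bdev.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096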
00:16:25.255 [2024-12-06 15:43:13.916917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82097 ] 00:16:25.514 [2024-12-06 15:43:14.070415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.514 [2024-12-06 15:43:14.105962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.773 Running I/O for 5 seconds... 00:16:27.641 22238.00 IOPS, 86.87 MiB/s [2024-12-06T15:43:17.270Z] 21424.50 IOPS, 83.69 MiB/s [2024-12-06T15:43:18.646Z] 21185.33 IOPS, 82.76 MiB/s [2024-12-06T15:43:19.582Z] 22240.00 IOPS, 86.88 MiB/s 00:16:30.889 Latency(us) 00:16:30.889 [2024-12-06T15:43:19.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.889 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:30.889 xnvme_bdev : 5.00 23080.08 90.16 0.00 0.00 2766.70 269.96 5570.56 00:16:30.889 [2024-12-06T15:43:19.582Z] =================================================================================================================== 00:16:30.889 [2024-12-06T15:43:19.582Z] Total : 23080.08 90.16 0.00 0.00 2766.70 269.96 5570.56 00:16:30.889 ************************************ 00:16:30.889 END TEST xnvme_bdevperf 00:16:30.889 ************************************ 00:16:30.889 00:16:30.889 real 0m11.455s 00:16:30.889 user 0m2.638s 00:16:30.889 sys 0m6.437s 00:16:30.889 15:43:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.889 15:43:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.148 15:43:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:31.148 15:43:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.148 15:43:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.148 15:43:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.148 ************************************ 00:16:31.148 START TEST xnvme_fio_plugin 00:16:31.148 ************************************ 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.148 15:43:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:31.148 { 00:16:31.148 "subsystems": [ 00:16:31.148 { 00:16:31.148 "subsystem": "bdev", 00:16:31.148 "config": [ 00:16:31.148 { 00:16:31.148 "params": { 00:16:31.148 "io_mechanism": "libaio", 00:16:31.148 "conserve_cpu": false, 00:16:31.148 "filename": "/dev/nvme0n1", 00:16:31.148 "name": "xnvme_bdev" 00:16:31.148 }, 00:16:31.148 "method": "bdev_xnvme_create" 00:16:31.148 }, 00:16:31.148 { 00:16:31.148 "method": "bdev_wait_for_examine" 00:16:31.148 } 00:16:31.148 ] 00:16:31.148 } 00:16:31.148 ] 00:16:31.148 } 00:16:31.148 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:31.148 fio-3.35 00:16:31.148 Starting 1 thread 00:16:37.710 00:16:37.710 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82205: Fri Dec 6 15:43:25 2024 00:16:37.710 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(530MiB/5001msec) 00:16:37.710 slat (usec): min=4, max=5473, avg=32.96, stdev=39.00 00:16:37.710 clat (usec): min=100, max=9711, avg=1306.02, stdev=714.20 00:16:37.710 lat (usec): min=136, max=9759, avg=1338.98, stdev=716.66 00:16:37.710 clat percentiles (usec): 00:16:37.710 | 1.00th=[ 243], 5.00th=[ 355], 10.00th=[ 457], 20.00th=[ 652], 00:16:37.710 | 30.00th=[ 840], 40.00th=[ 1037], 50.00th=[ 1221], 60.00th=[ 1418], 00:16:37.710 | 70.00th=[ 1631], 80.00th=[ 1876], 90.00th=[ 2212], 95.00th=[ 2540], 00:16:37.710 | 99.00th=[ 3359], 99.50th=[ 3720], 99.90th=[ 4686], 99.95th=[ 5145], 00:16:37.710 | 99.99th=[ 9241] 00:16:37.710 bw ( KiB/s): min=97640, max=117664, per=100.00%, avg=108557.33, 
stdev=5735.35, samples=9 00:16:37.710 iops : min=24410, max=29416, avg=27139.33, stdev=1433.84, samples=9 00:16:37.710 lat (usec) : 250=1.17%, 500=11.00%, 750=13.03%, 1000=12.97% 00:16:37.710 lat (msec) : 2=45.90%, 4=15.63%, 10=0.31% 00:16:37.710 cpu : usr=22.40%, sys=59.16%, ctx=121, majf=0, minf=962 00:16:37.710 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=12.0%, 16=26.4%, 32=53.9%, >=64=1.7% 00:16:37.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.710 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:37.710 issued rwts: total=135607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.710 00:16:37.710 Run status group 0 (all jobs): 00:16:37.710 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=530MiB (555MB), run=5001-5001msec 00:16:37.710 ----------------------------------------------------- 00:16:37.710 Suppressions used: 00:16:37.711 count bytes template 00:16:37.711 1 11 /usr/src/fio/parse.c 00:16:37.711 1 8 libtcmalloc_minimal.so 00:16:37.711 1 904 libcrypto.so 00:16:37.711 ----------------------------------------------------- 00:16:37.711 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:37.711 15:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:37.711 { 00:16:37.711 "subsystems": [ 00:16:37.711 { 00:16:37.711 "subsystem": "bdev", 00:16:37.711 "config": [ 00:16:37.711 { 00:16:37.711 "params": { 00:16:37.711 "io_mechanism": "libaio", 00:16:37.711 "conserve_cpu": false, 00:16:37.711 "filename": "/dev/nvme0n1", 00:16:37.711 "name": "xnvme_bdev" 00:16:37.711 }, 00:16:37.711 "method": "bdev_xnvme_create" 00:16:37.711 }, 00:16:37.711 { 00:16:37.711 "method": "bdev_wait_for_examine" 00:16:37.711 } 00:16:37.711 ] 00:16:37.711 } 00:16:37.711 ] 00:16:37.711 } 00:16:37.711 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:37.711 fio-3.35 00:16:37.711 Starting 1 thread 00:16:42.994 00:16:42.994 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82286: Fri Dec 6 15:43:31 2024 00:16:42.994 write: IOPS=27.0k, BW=105MiB/s (110MB/s)(527MiB/5001msec); 0 zone resets 00:16:42.994 slat (usec): min=4, max=977, avg=32.97, stdev=39.16 00:16:42.994 clat (usec): min=104, max=6077, avg=1328.74, stdev=705.94 00:16:42.994 lat (usec): min=125, max=6098, avg=1361.71, stdev=707.79 00:16:42.994 clat percentiles (usec): 00:16:42.994 | 1.00th=[ 262], 5.00th=[ 375], 10.00th=[ 474], 20.00th=[ 668], 00:16:42.994 | 30.00th=[ 857], 40.00th=[ 1045], 50.00th=[ 1254], 60.00th=[ 1450], 00:16:42.994 | 70.00th=[ 1663], 80.00th=[ 1926], 90.00th=[ 2278], 95.00th=[ 2573], 00:16:42.994 | 99.00th=[ 3228], 99.50th=[ 3523], 99.90th=[ 4490], 99.95th=[ 4817], 00:16:42.994 | 99.99th=[ 5407] 00:16:42.994 bw ( KiB/s): min=93704, max=129544, per=100.00%, avg=108238.22, stdev=13764.55, samples=9 00:16:42.994 iops : min=23426, max=32386, avg=27059.56, stdev=3441.14, samples=9 00:16:42.994 lat (usec) : 250=0.81%, 500=10.62%, 750=12.98%, 1000=13.09% 00:16:42.994 lat (msec) : 2=44.99%, 4=17.27%, 10=0.22% 00:16:42.994 cpu : usr=23.34%, sys=59.30%, ctx=78, majf=0, minf=1066 00:16:42.994 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=12.1%, 16=26.5%, 32=53.9%, >=64=1.7% 00:16:42.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.994 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:42.994 issued rwts: total=0,134865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.994 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:42.994 00:16:42.994 Run status group 0 (all jobs): 00:16:42.994 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=527MiB (552MB), run=5001-5001msec 00:16:43.560 ----------------------------------------------------- 00:16:43.560 Suppressions used: 00:16:43.560 count bytes template 00:16:43.560 1 11 /usr/src/fio/parse.c 00:16:43.560 1 8 libtcmalloc_minimal.so 00:16:43.560 1 904 libcrypto.so 00:16:43.560 ----------------------------------------------------- 00:16:43.560 00:16:43.560 00:16:43.560 real 0m12.554s 00:16:43.560 user 0m3.800s 00:16:43.560 sys 0m6.641s 00:16:43.560 15:43:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.560 15:43:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.560 ************************************ 00:16:43.560 END TEST xnvme_fio_plugin 00:16:43.560 ************************************ 00:16:43.560 15:43:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:43.560 15:43:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:43.560 15:43:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:43.560 15:43:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:43.560 15:43:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.560 15:43:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.560 15:43:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.560 ************************************ 00:16:43.560 START TEST xnvme_rpc 00:16:43.560 ************************************ 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82367 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82367 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82367 ']' 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.560 15:43:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.819 [2024-12-06 15:43:32.316065] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
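[editor's note] This xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled (the -c argument in the traced call). A hedged sketch of the same sequence issued by hand; rpc_cmd in the trace is assumed to forward to scripts/rpc.py against the spdk_tgt just started:

    ./build/bin/spdk_tgt &                       # target must be listening first
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    ./scripts/rpc.py framework_get_config bdev   # inspect the registered bdev
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev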
00:16:43.819 [2024-12-06 15:43:32.316895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82367 ] 00:16:43.819 [2024-12-06 15:43:32.467407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.077 [2024-12-06 15:43:32.522047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 xnvme_bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82367 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82367 ']' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82367 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82367 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.014 killing process with pid 82367 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82367' 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82367 00:16:45.014 15:43:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82367 00:16:45.582 00:16:45.582 real 0m1.953s 00:16:45.582 user 0m2.193s 00:16:45.582 sys 0m0.522s 00:16:45.582 15:43:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.582 15:43:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.582 ************************************ 00:16:45.582 END TEST xnvme_rpc 00:16:45.582 ************************************ 00:16:45.582 15:43:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:45.582 15:43:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.582 15:43:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.582 15:43:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.582 ************************************ 00:16:45.582 START TEST xnvme_bdevperf 00:16:45.582 ************************************ 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:45.582 15:43:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:45.841 { 00:16:45.841 "subsystems": [ 00:16:45.841 { 00:16:45.841 "subsystem": "bdev", 00:16:45.841 "config": [ 00:16:45.841 { 00:16:45.841 "params": { 00:16:45.841 "io_mechanism": "libaio", 00:16:45.841 "conserve_cpu": true, 00:16:45.841 "filename": "/dev/nvme0n1", 00:16:45.841 "name": "xnvme_bdev" 00:16:45.841 }, 00:16:45.841 "method": "bdev_xnvme_create" 00:16:45.841 }, 00:16:45.841 { 00:16:45.841 "method": "bdev_wait_for_examine" 00:16:45.841 } 00:16:45.841 ] 00:16:45.841 } 00:16:45.841 ] 00:16:45.841 } 00:16:45.841 [2024-12-06 15:43:34.329044] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:16:45.842 [2024-12-06 15:43:34.329246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82430 ] 00:16:45.842 [2024-12-06 15:43:34.485118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.842 [2024-12-06 15:43:34.524973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.101 Running I/O for 5 seconds... 00:16:48.409 26217.00 IOPS, 102.41 MiB/s [2024-12-06T15:43:38.036Z] 26171.50 IOPS, 102.23 MiB/s [2024-12-06T15:43:38.973Z] 24737.33 IOPS, 96.63 MiB/s [2024-12-06T15:43:39.908Z] 23959.25 IOPS, 93.59 MiB/s 00:16:51.215 Latency(us) 00:16:51.215 [2024-12-06T15:43:39.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.215 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:51.215 xnvme_bdev : 5.00 23516.44 91.86 0.00 0.00 2715.29 240.17 8400.52 00:16:51.215 [2024-12-06T15:43:39.908Z] =================================================================================================================== 00:16:51.215 [2024-12-06T15:43:39.908Z] Total : 23516.44 91.86 0.00 0.00 2715.29 240.17 8400.52 00:16:51.473 15:43:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:51.473 15:43:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:51.473 15:43:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:51.473 15:43:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:51.473 15:43:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:51.473 { 00:16:51.473 "subsystems": [ 00:16:51.473 { 00:16:51.473 "subsystem": "bdev", 00:16:51.473 "config": [ 00:16:51.473 { 00:16:51.473 "params": { 00:16:51.473 "io_mechanism": "libaio", 00:16:51.473 "conserve_cpu": true, 00:16:51.473 "filename": "/dev/nvme0n1", 00:16:51.473 "name": "xnvme_bdev" 00:16:51.473 }, 00:16:51.473 "method": "bdev_xnvme_create" 00:16:51.473 }, 00:16:51.473 { 00:16:51.473 "method": "bdev_wait_for_examine" 00:16:51.473 } 00:16:51.473 ] 00:16:51.473 } 00:16:51.473 ] 00:16:51.473 } 00:16:51.473 [2024-12-06 15:43:40.067487] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
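[editor's note] Both access patterns reuse one generated config; only -w changes between invocations. A sketch of the loop shape driven by the io_pattern nameref in xnvme.sh, assuming the $conf JSON from the earlier sketch (with conserve_cpu flipped to true for this pass):

    for io_pattern in randread randwrite; do   # io_pattern_ref=libaio in the trace
      ./build/examples/bdevperf --json <(echo "$conf") \
          -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done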
00:16:51.473 [2024-12-06 15:43:40.068179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82494 ] 00:16:51.731 [2024-12-06 15:43:40.231175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.731 [2024-12-06 15:43:40.275307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.731 Running I/O for 5 seconds... 00:16:54.076 19626.00 IOPS, 76.66 MiB/s [2024-12-06T15:43:43.705Z] 19734.50 IOPS, 77.09 MiB/s [2024-12-06T15:43:44.642Z] 19632.00 IOPS, 76.69 MiB/s [2024-12-06T15:43:45.578Z] 19673.00 IOPS, 76.85 MiB/s 00:16:56.885 Latency(us) 00:16:56.885 [2024-12-06T15:43:45.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.885 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:56.885 xnvme_bdev : 5.01 19677.43 76.86 0.00 0.00 3245.13 290.44 9055.88 00:16:56.885 [2024-12-06T15:43:45.578Z] =================================================================================================================== 00:16:56.885 [2024-12-06T15:43:45.578Z] Total : 19677.43 76.86 0.00 0.00 3245.13 290.44 9055.88 00:16:57.144 00:16:57.144 real 0m11.473s 00:16:57.144 user 0m2.593s 00:16:57.144 sys 0m6.326s 00:16:57.144 15:43:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.144 15:43:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:57.144 ************************************ 00:16:57.144 END TEST xnvme_bdevperf 00:16:57.144 ************************************ 00:16:57.144 15:43:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:57.144 15:43:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.144 15:43:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.144 15:43:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.144 ************************************ 00:16:57.144 START TEST xnvme_fio_plugin 00:16:57.144 ************************************ 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.144 15:43:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.144 { 00:16:57.144 "subsystems": [ 00:16:57.144 { 00:16:57.144 "subsystem": "bdev", 00:16:57.144 "config": [ 00:16:57.144 { 00:16:57.144 "params": { 00:16:57.144 "io_mechanism": "libaio", 00:16:57.144 "conserve_cpu": true, 00:16:57.144 "filename": "/dev/nvme0n1", 00:16:57.144 "name": "xnvme_bdev" 00:16:57.144 }, 00:16:57.144 "method": "bdev_xnvme_create" 00:16:57.144 }, 00:16:57.144 { 00:16:57.144 "method": "bdev_wait_for_examine" 00:16:57.144 } 00:16:57.144 ] 00:16:57.144 } 00:16:57.144 ] 00:16:57.144 } 00:16:57.402 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:57.402 fio-3.35 00:16:57.402 Starting 1 thread 00:17:03.966 00:17:03.966 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82610: Fri Dec 6 15:43:51 2024 00:17:03.966 read: IOPS=33.9k, BW=132MiB/s (139MB/s)(662MiB/5001msec) 00:17:03.966 slat (usec): min=4, max=889, avg=26.43, stdev=34.03 00:17:03.966 clat (usec): min=91, max=6314, avg=1060.86, stdev=683.45 00:17:03.966 lat (usec): min=150, max=6392, avg=1087.29, stdev=689.02 00:17:03.966 clat percentiles (usec): 00:17:03.966 | 1.00th=[ 198], 5.00th=[ 289], 10.00th=[ 367], 20.00th=[ 502], 00:17:03.966 | 30.00th=[ 635], 40.00th=[ 775], 50.00th=[ 914], 60.00th=[ 1057], 00:17:03.966 | 70.00th=[ 1221], 80.00th=[ 1467], 90.00th=[ 2008], 95.00th=[ 2540], 00:17:03.966 | 99.00th=[ 3261], 99.50th=[ 3523], 99.90th=[ 4228], 99.95th=[ 4621], 00:17:03.966 | 99.99th=[ 5407] 00:17:03.966 bw ( KiB/s): min=78232, max=176192, per=100.00%, avg=141283.11, stdev=32933.39, samples=9 
00:17:03.966 iops : min=19558, max=44048, avg=35320.78, stdev=8233.35, samples=9 00:17:03.966 lat (usec) : 100=0.06%, 250=2.71%, 500=17.20%, 750=17.99%, 1000=18.19% 00:17:03.966 lat (msec) : 2=33.74%, 4=9.94%, 10=0.17% 00:17:03.966 cpu : usr=22.44%, sys=59.98%, ctx=83, majf=0, minf=1065 00:17:03.966 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=11.1%, 16=26.3%, 32=55.5%, >=64=1.8% 00:17:03.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.966 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:03.966 issued rwts: total=169364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.966 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:03.966 00:17:03.966 Run status group 0 (all jobs): 00:17:03.966 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=662MiB (694MB), run=5001-5001msec 00:17:03.966 ----------------------------------------------------- 00:17:03.966 Suppressions used: 00:17:03.966 count bytes template 00:17:03.966 1 11 /usr/src/fio/parse.c 00:17:03.966 1 8 libtcmalloc_minimal.so 00:17:03.966 1 904 libcrypto.so 00:17:03.966 ----------------------------------------------------- 00:17:03.966 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:03.966 15:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:03.966 { 00:17:03.966 "subsystems": [ 00:17:03.966 { 00:17:03.966 "subsystem": "bdev", 00:17:03.966 "config": [ 00:17:03.966 { 00:17:03.966 "params": { 00:17:03.966 "io_mechanism": "libaio", 00:17:03.966 "conserve_cpu": true, 00:17:03.966 "filename": "/dev/nvme0n1", 00:17:03.966 "name": "xnvme_bdev" 00:17:03.966 }, 00:17:03.966 "method": "bdev_xnvme_create" 00:17:03.966 }, 00:17:03.966 { 00:17:03.966 "method": "bdev_wait_for_examine" 00:17:03.966 } 00:17:03.966 ] 00:17:03.966 } 00:17:03.966 ] 00:17:03.966 } 00:17:03.966 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:03.966 fio-3.35 00:17:03.966 Starting 1 thread 00:17:09.237 00:17:09.237 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82695: Fri Dec 6 15:43:57 2024 00:17:09.237 write: IOPS=33.1k, BW=129MiB/s (135MB/s)(646MiB/5001msec); 0 zone resets 00:17:09.237 slat (usec): min=4, max=768, avg=26.92, stdev=34.66 00:17:09.237 clat (usec): min=62, max=5966, avg=1092.78, stdev=682.17 00:17:09.237 lat (usec): min=160, max=6059, avg=1119.70, stdev=687.61 00:17:09.237 clat percentiles (usec): 00:17:09.237 | 1.00th=[ 217], 5.00th=[ 314], 10.00th=[ 388], 20.00th=[ 529], 00:17:09.237 | 30.00th=[ 668], 40.00th=[ 807], 50.00th=[ 938], 60.00th=[ 1090], 00:17:09.237 | 70.00th=[ 1270], 80.00th=[ 1532], 90.00th=[ 2057], 95.00th=[ 2540], 00:17:09.237 | 99.00th=[ 3261], 99.50th=[ 3523], 99.90th=[ 4293], 99.95th=[ 4621], 00:17:09.237 | 99.99th=[ 5145] 00:17:09.237 bw ( KiB/s): min=82435, max=194336, per=100.00%, avg=132597.67, stdev=44631.95, samples=9 00:17:09.237 iops : min=20608, max=48584, avg=33149.33, stdev=11158.09, samples=9 00:17:09.237 lat (usec) : 100=0.03%, 250=1.94%, 500=15.97%, 750=18.01%, 1000=18.42% 00:17:09.237 lat (msec) : 2=34.96%, 4=10.50%, 10=0.17% 00:17:09.237 cpu : usr=25.10%, sys=57.40%, ctx=83, majf=0, minf=1066 00:17:09.237 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=10.8%, 16=26.3%, 32=56.1%, >=64=1.8% 00:17:09.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.237 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:09.237 issued rwts: total=0,165350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.237 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:09.237 00:17:09.237 Run status group 0 (all jobs): 00:17:09.237 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=646MiB (677MB), run=5001-5001msec 00:17:09.803 ----------------------------------------------------- 00:17:09.803 Suppressions used: 00:17:09.803 count bytes template 00:17:09.803 1 11 /usr/src/fio/parse.c 00:17:09.803 1 8 libtcmalloc_minimal.so 00:17:09.803 1 904 libcrypto.so 00:17:09.803 ----------------------------------------------------- 00:17:09.803 00:17:09.803 00:17:09.803 real 0m12.632s 00:17:09.803 user 0m3.963s 00:17:09.803 sys 0m6.570s 00:17:09.803 15:43:58 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.803 15:43:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 ************************************ 00:17:09.803 END TEST xnvme_fio_plugin 00:17:09.803 ************************************ 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:09.803 15:43:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:09.803 15:43:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.803 15:43:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.803 15:43:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 ************************************ 00:17:09.803 START TEST xnvme_rpc 00:17:09.803 ************************************ 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82771 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82771 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82771 ']' 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.803 15:43:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.061 [2024-12-06 15:43:58.572779] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
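[editor's note] The io_uring pass drives the identical RPC flow; the trailing '' in the traced create call is the expanded cc["false"] slot, i.e. conserve_cpu simply omitted. The per-parameter checks below use a jq filter copied verbatim from the trace:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # expected: io_uring; the same filter on .params.conserve_cpu yields false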
00:17:10.061 [2024-12-06 15:43:58.572992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82771 ] 00:17:10.061 [2024-12-06 15:43:58.733038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.319 [2024-12-06 15:43:58.783294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.886 xnvme_bdev 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.886 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.144 15:43:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82771 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82771 ']' 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82771 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82771 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.145 killing process with pid 82771 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82771' 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82771 00:17:11.145 15:43:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82771 00:17:11.712 00:17:11.712 real 0m1.868s 00:17:11.712 user 0m2.061s 00:17:11.712 sys 0m0.539s 00:17:11.712 15:44:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.712 15:44:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.712 ************************************ 00:17:11.712 END TEST xnvme_rpc 00:17:11.712 ************************************ 00:17:11.712 15:44:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:11.712 15:44:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.712 15:44:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.712 15:44:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.712 ************************************ 00:17:11.712 START TEST xnvme_bdevperf 00:17:11.712 ************************************ 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:11.712 15:44:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 { 00:17:11.971 "subsystems": [ 00:17:11.971 { 00:17:11.971 "subsystem": "bdev", 00:17:11.971 "config": [ 00:17:11.971 { 00:17:11.971 "params": { 00:17:11.971 "io_mechanism": "io_uring", 00:17:11.971 "conserve_cpu": false, 00:17:11.971 "filename": "/dev/nvme0n1", 00:17:11.971 "name": "xnvme_bdev" 00:17:11.971 }, 00:17:11.971 "method": "bdev_xnvme_create" 00:17:11.971 }, 00:17:11.971 { 00:17:11.971 "method": "bdev_wait_for_examine" 00:17:11.971 } 00:17:11.971 ] 00:17:11.971 } 00:17:11.971 ] 00:17:11.971 } 00:17:11.971 [2024-12-06 15:44:00.479796] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:17:11.971 [2024-12-06 15:44:00.480018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82829 ] 00:17:11.971 [2024-12-06 15:44:00.635094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.229 [2024-12-06 15:44:00.674206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.229 Running I/O for 5 seconds... 00:17:14.537 54080.00 IOPS, 211.25 MiB/s [2024-12-06T15:44:04.167Z] 53920.00 IOPS, 210.62 MiB/s [2024-12-06T15:44:05.104Z] 53930.67 IOPS, 210.67 MiB/s [2024-12-06T15:44:06.038Z] 54160.00 IOPS, 211.56 MiB/s 00:17:17.345 Latency(us) 00:17:17.345 [2024-12-06T15:44:06.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.345 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:17.346 xnvme_bdev : 5.00 53921.51 210.63 0.00 0.00 1183.46 830.37 3530.01 00:17:17.346 [2024-12-06T15:44:06.039Z] =================================================================================================================== 00:17:17.346 [2024-12-06T15:44:06.039Z] Total : 53921.51 210.63 0.00 0.00 1183.46 830.37 3530.01 00:17:17.604 15:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.604 15:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:17.604 15:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:17.604 15:44:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:17.604 15:44:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:17.604 { 00:17:17.604 "subsystems": [ 00:17:17.604 { 00:17:17.604 "subsystem": "bdev", 00:17:17.604 "config": [ 00:17:17.604 { 00:17:17.604 "params": { 00:17:17.604 "io_mechanism": "io_uring", 00:17:17.604 "conserve_cpu": false, 00:17:17.604 "filename": "/dev/nvme0n1", 00:17:17.604 "name": "xnvme_bdev" 00:17:17.604 }, 00:17:17.604 "method": "bdev_xnvme_create" 00:17:17.604 }, 00:17:17.604 { 00:17:17.604 "method": "bdev_wait_for_examine" 00:17:17.604 } 00:17:17.604 ] 00:17:17.604 } 00:17:17.604 ] 00:17:17.604 } 00:17:17.604 [2024-12-06 15:44:06.171695] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
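[editor's note] The fio_plugin tests interleaved with these runs cannot launch fio directly: the SPDK bdev ioengine is built with ASan, so the traced helper resolves the ASan runtime with ldd and preloads it ahead of the plugin. A condensed sketch of that traced logic, assuming the $conf JSON from the earlier sketches:

    plugin=./build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(echo "$conf") \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev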
00:17:17.604 [2024-12-06 15:44:06.171894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82899 ] 00:17:17.863 [2024-12-06 15:44:06.322481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.863 [2024-12-06 15:44:06.359665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.863 Running I/O for 5 seconds... 00:17:20.178 50560.00 IOPS, 197.50 MiB/s [2024-12-06T15:44:09.808Z] 48096.00 IOPS, 187.88 MiB/s [2024-12-06T15:44:10.755Z] 45098.67 IOPS, 176.17 MiB/s [2024-12-06T15:44:11.734Z] 44080.00 IOPS, 172.19 MiB/s 00:17:23.041 Latency(us) 00:17:23.041 [2024-12-06T15:44:11.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.041 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:23.041 xnvme_bdev : 5.00 44054.18 172.09 0.00 0.00 1448.07 882.50 5928.03 00:17:23.041 [2024-12-06T15:44:11.734Z] =================================================================================================================== 00:17:23.041 [2024-12-06T15:44:11.734Z] Total : 44054.18 172.09 0.00 0.00 1448.07 882.50 5928.03 00:17:23.300 00:17:23.300 real 0m11.385s 00:17:23.300 user 0m4.173s 00:17:23.300 sys 0m7.019s 00:17:23.300 15:44:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.300 15:44:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.300 ************************************ 00:17:23.300 END TEST xnvme_bdevperf 00:17:23.300 ************************************ 00:17:23.300 15:44:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:23.300 15:44:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:23.300 15:44:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.300 15:44:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.300 ************************************ 00:17:23.300 START TEST xnvme_fio_plugin 00:17:23.300 ************************************ 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.300 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.301 15:44:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.301 { 00:17:23.301 "subsystems": [ 00:17:23.301 { 00:17:23.301 "subsystem": "bdev", 00:17:23.301 "config": [ 00:17:23.301 { 00:17:23.301 "params": { 00:17:23.301 "io_mechanism": "io_uring", 00:17:23.301 "conserve_cpu": false, 00:17:23.301 "filename": "/dev/nvme0n1", 00:17:23.301 "name": "xnvme_bdev" 00:17:23.301 }, 00:17:23.301 "method": "bdev_xnvme_create" 00:17:23.301 }, 00:17:23.301 { 00:17:23.301 "method": "bdev_wait_for_examine" 00:17:23.301 } 00:17:23.301 ] 00:17:23.301 } 00:17:23.301 ] 00:17:23.301 } 00:17:23.560 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.560 fio-3.35 00:17:23.560 Starting 1 thread 00:17:30.125 00:17:30.125 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83011: Fri Dec 6 15:44:17 2024 00:17:30.125 read: IOPS=46.8k, BW=183MiB/s (191MB/s)(914MiB/5002msec) 00:17:30.125 slat (usec): min=2, max=124, avg= 3.36, stdev= 2.31 00:17:30.125 clat (usec): min=158, max=6559, avg=1232.79, stdev=179.69 00:17:30.125 lat (usec): min=178, max=6562, avg=1236.15, stdev=179.94 00:17:30.125 clat percentiles (usec): 00:17:30.125 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1123], 00:17:30.125 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1237], 00:17:30.125 | 70.00th=[ 1270], 80.00th=[ 1319], 90.00th=[ 1385], 95.00th=[ 1467], 00:17:30.125 | 99.00th=[ 1713], 99.50th=[ 1958], 99.90th=[ 3654], 99.95th=[ 3884], 00:17:30.125 | 99.99th=[ 5014] 00:17:30.125 bw ( KiB/s): min=167089, max=196096, per=99.60%, avg=186256.11, 
stdev=9549.26, samples=9 00:17:30.125 iops : min=41772, max=49024, avg=46564.00, stdev=2387.38, samples=9 00:17:30.125 lat (usec) : 250=0.01%, 500=0.02%, 750=0.04%, 1000=0.97% 00:17:30.125 lat (msec) : 2=98.47%, 4=0.46%, 10=0.03% 00:17:30.126 cpu : usr=28.27%, sys=70.51%, ctx=10, majf=0, minf=1063 00:17:30.126 IO depths : 1=1.4%, 2=2.9%, 4=6.1%, 8=12.5%, 16=25.1%, 32=50.4%, >=64=1.6% 00:17:30.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.126 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:30.126 issued rwts: total=233857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.126 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:30.126 00:17:30.126 Run status group 0 (all jobs): 00:17:30.126 READ: bw=183MiB/s (191MB/s), 183MiB/s-183MiB/s (191MB/s-191MB/s), io=914MiB (958MB), run=5002-5002msec 00:17:30.126 ----------------------------------------------------- 00:17:30.126 Suppressions used: 00:17:30.126 count bytes template 00:17:30.126 1 11 /usr/src/fio/parse.c 00:17:30.126 1 8 libtcmalloc_minimal.so 00:17:30.126 1 904 libcrypto.so 00:17:30.126 ----------------------------------------------------- 00:17:30.126 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.126 15:44:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.126 { 00:17:30.126 "subsystems": [ 00:17:30.126 { 00:17:30.126 "subsystem": "bdev", 00:17:30.126 "config": [ 00:17:30.126 { 00:17:30.126 "params": { 00:17:30.126 "io_mechanism": "io_uring", 00:17:30.126 "conserve_cpu": false, 00:17:30.126 "filename": "/dev/nvme0n1", 00:17:30.126 "name": "xnvme_bdev" 00:17:30.126 }, 00:17:30.126 "method": "bdev_xnvme_create" 00:17:30.126 }, 00:17:30.126 { 00:17:30.126 "method": "bdev_wait_for_examine" 00:17:30.126 } 00:17:30.126 ] 00:17:30.126 } 00:17:30.126 ] 00:17:30.126 } 00:17:30.126 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:30.126 fio-3.35 00:17:30.126 Starting 1 thread 00:17:35.393 00:17:35.393 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83092: Fri Dec 6 15:44:23 2024 00:17:35.393 write: IOPS=45.4k, BW=178MiB/s (186MB/s)(888MiB/5001msec); 0 zone resets 00:17:35.393 slat (usec): min=2, max=122, avg= 3.79, stdev= 2.56 00:17:35.393 clat (usec): min=393, max=4883, avg=1257.06, stdev=142.58 00:17:35.393 lat (usec): min=396, max=4887, avg=1260.86, stdev=143.02 00:17:35.393 clat percentiles (usec): 00:17:35.393 | 1.00th=[ 1020], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1156], 00:17:35.393 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1270], 00:17:35.393 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1418], 95.00th=[ 1483], 00:17:35.393 | 99.00th=[ 1762], 99.50th=[ 1860], 99.90th=[ 2040], 99.95th=[ 2409], 00:17:35.393 | 99.99th=[ 2933] 00:17:35.393 bw ( KiB/s): min=172544, max=193024, per=100.00%, avg=182736.00, stdev=6776.93, samples=9 00:17:35.393 iops : min=43136, max=48256, avg=45684.00, stdev=1694.23, samples=9 00:17:35.393 lat (usec) : 500=0.01%, 1000=0.51% 00:17:35.393 lat (msec) : 2=99.34%, 4=0.14%, 10=0.01% 00:17:35.393 cpu : usr=29.42%, sys=69.38%, ctx=27, majf=0, minf=1064 00:17:35.393 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:35.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.393 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:35.393 issued rwts: total=0,227274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:35.393 00:17:35.393 Run status group 0 (all jobs): 00:17:35.393 WRITE: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=888MiB (931MB), run=5001-5001msec 00:17:35.961 ----------------------------------------------------- 00:17:35.961 Suppressions used: 00:17:35.961 count bytes template 00:17:35.961 1 11 /usr/src/fio/parse.c 00:17:35.961 1 8 libtcmalloc_minimal.so 00:17:35.961 1 904 libcrypto.so 00:17:35.961 ----------------------------------------------------- 00:17:35.961 00:17:35.961 00:17:35.961 real 0m12.560s 00:17:35.961 user 0m4.464s 00:17:35.961 sys 0m7.713s 00:17:35.961 15:44:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.961 15:44:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:35.961 ************************************ 00:17:35.961 END TEST xnvme_fio_plugin 00:17:35.961 ************************************ 00:17:35.961 15:44:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:35.961 15:44:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:35.961 15:44:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:35.961 15:44:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:35.961 15:44:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.961 15:44:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.961 15:44:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.961 ************************************ 00:17:35.961 START TEST xnvme_rpc 00:17:35.961 ************************************ 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:35.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83173 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83173 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83173 ']' 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.961 15:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.961 [2024-12-06 15:44:24.578918] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
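Here spdk_tgt is coming up so the xnvme_rpc test can exercise the bdev_xnvme_* RPCs over /var/tmp/spdk.sock. Assuming rpc_cmd in the traces below forwards to scripts/rpc.py, the same flow by hand would look roughly like this (a sketch, not captured output):

    # Create an io_uring xnvme bdev with conserve_cpu enabled (-c),
    # read its registered params back, then tear it down.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    $rpc framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    $rpc bdev_xnvme_delete xnvme_bdev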
00:17:35.961 [2024-12-06 15:44:24.579495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83173 ] 00:17:36.220 [2024-12-06 15:44:24.746593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.220 [2024-12-06 15:44:24.791922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.157 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.157 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 xnvme_bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:37.158 15:44:25 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83173 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83173 ']' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83173 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83173 00:17:37.158 killing process with pid 83173 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83173' 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83173 00:17:37.158 15:44:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83173 00:17:37.725 ************************************ 00:17:37.725 END TEST xnvme_rpc 00:17:37.725 ************************************ 00:17:37.725 00:17:37.725 real 0m1.786s 00:17:37.725 user 0m1.915s 00:17:37.725 sys 0m0.599s 00:17:37.725 15:44:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.725 15:44:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.725 15:44:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:37.725 15:44:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.725 15:44:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.725 15:44:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.725 ************************************ 00:17:37.725 START TEST xnvme_bdevperf 00:17:37.725 ************************************ 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:37.725 15:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:37.726 15:44:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:37.726 15:44:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:37.726 { 00:17:37.726 "subsystems": [ 00:17:37.726 { 00:17:37.726 "subsystem": "bdev", 00:17:37.726 "config": [ 00:17:37.726 { 00:17:37.726 "params": { 00:17:37.726 "io_mechanism": "io_uring", 00:17:37.726 "conserve_cpu": true, 00:17:37.726 "filename": "/dev/nvme0n1", 00:17:37.726 "name": "xnvme_bdev" 00:17:37.726 }, 00:17:37.726 "method": "bdev_xnvme_create" 00:17:37.726 }, 00:17:37.726 { 00:17:37.726 "method": "bdev_wait_for_examine" 00:17:37.726 } 00:17:37.726 ] 00:17:37.726 } 00:17:37.726 ] 00:17:37.726 } 00:17:37.726 [2024-12-06 15:44:26.393192] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:17:37.726 [2024-12-06 15:44:26.393406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83236 ] 00:17:37.984 [2024-12-06 15:44:26.548334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.984 [2024-12-06 15:44:26.584039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.243 Running I/O for 5 seconds... 00:17:40.116 45952.00 IOPS, 179.50 MiB/s [2024-12-06T15:44:29.745Z] 44992.00 IOPS, 175.75 MiB/s [2024-12-06T15:44:31.122Z] 44458.67 IOPS, 173.67 MiB/s [2024-12-06T15:44:32.083Z] 44080.00 IOPS, 172.19 MiB/s 00:17:43.390 Latency(us) 00:17:43.390 [2024-12-06T15:44:32.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.390 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:43.390 xnvme_bdev : 5.00 44752.59 174.81 0.00 0.00 1425.69 923.46 4140.68 00:17:43.390 [2024-12-06T15:44:32.083Z] =================================================================================================================== 00:17:43.390 [2024-12-06T15:44:32.083Z] Total : 44752.59 174.81 0.00 0.00 1425.69 923.46 4140.68 00:17:43.391 15:44:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:43.391 15:44:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:43.391 15:44:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:43.391 15:44:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:43.391 15:44:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:43.391 { 00:17:43.391 "subsystems": [ 00:17:43.391 { 00:17:43.391 "subsystem": "bdev", 00:17:43.391 "config": [ 00:17:43.391 { 00:17:43.391 "params": { 00:17:43.391 "io_mechanism": "io_uring", 00:17:43.391 "conserve_cpu": true, 00:17:43.391 "filename": "/dev/nvme0n1", 00:17:43.391 "name": "xnvme_bdev" 00:17:43.391 }, 00:17:43.391 "method": "bdev_xnvme_create" 00:17:43.391 }, 00:17:43.391 { 00:17:43.391 "method": "bdev_wait_for_examine" 00:17:43.391 } 00:17:43.391 ] 00:17:43.391 } 00:17:43.391 ] 00:17:43.391 } 00:17:43.650 [2024-12-06 15:44:32.101229] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:17:43.650 [2024-12-06 15:44:32.101433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83300 ] 00:17:43.650 [2024-12-06 15:44:32.257135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.650 [2024-12-06 15:44:32.298708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.909 Running I/O for 5 seconds... 00:17:45.783 45760.00 IOPS, 178.75 MiB/s [2024-12-06T15:44:35.851Z] 43999.50 IOPS, 171.87 MiB/s [2024-12-06T15:44:36.786Z] 43349.00 IOPS, 169.33 MiB/s [2024-12-06T15:44:37.720Z] 44783.75 IOPS, 174.94 MiB/s 00:17:49.027 Latency(us) 00:17:49.027 [2024-12-06T15:44:37.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.027 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:49.027 xnvme_bdev : 5.00 45399.00 177.34 0.00 0.00 1405.63 826.65 5093.93 00:17:49.027 [2024-12-06T15:44:37.720Z] =================================================================================================================== 00:17:49.027 [2024-12-06T15:44:37.721Z] Total : 45399.00 177.34 0.00 0.00 1405.63 826.65 5093.93 00:17:49.028 00:17:49.028 real 0m11.399s 00:17:49.028 user 0m4.550s 00:17:49.028 sys 0m6.291s 00:17:49.028 15:44:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.028 15:44:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.028 ************************************ 00:17:49.028 END TEST xnvme_bdevperf 00:17:49.028 ************************************ 00:17:49.286 15:44:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:49.286 15:44:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.286 15:44:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.286 15:44:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.286 ************************************ 00:17:49.286 START TEST xnvme_fio_plugin 00:17:49.286 ************************************ 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:49.286 15:44:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:49.286 { 00:17:49.287 "subsystems": [ 00:17:49.287 { 00:17:49.287 "subsystem": "bdev", 00:17:49.287 "config": [ 00:17:49.287 { 00:17:49.287 "params": { 00:17:49.287 "io_mechanism": "io_uring", 00:17:49.287 "conserve_cpu": true, 00:17:49.287 "filename": "/dev/nvme0n1", 00:17:49.287 "name": "xnvme_bdev" 00:17:49.287 }, 00:17:49.287 "method": "bdev_xnvme_create" 00:17:49.287 }, 00:17:49.287 { 00:17:49.287 "method": "bdev_wait_for_examine" 00:17:49.287 } 00:17:49.287 ] 00:17:49.287 } 00:17:49.287 ] 00:17:49.287 } 00:17:49.544 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:49.544 fio-3.35 00:17:49.544 Starting 1 thread 00:17:54.817 00:17:54.817 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83407: Fri Dec 6 15:44:43 2024 00:17:54.817 read: IOPS=43.4k, BW=169MiB/s (178MB/s)(847MiB/5001msec) 00:17:54.817 slat (nsec): min=2431, max=73225, avg=4177.15, stdev=2037.46 00:17:54.817 clat (usec): min=879, max=2792, avg=1307.35, stdev=169.83 00:17:54.817 lat (usec): min=883, max=2829, avg=1311.53, stdev=170.66 00:17:54.817 clat percentiles (usec): 00:17:54.817 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:17:54.817 | 30.00th=[ 1205], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1336], 00:17:54.817 | 70.00th=[ 1369], 80.00th=[ 1434], 90.00th=[ 1516], 95.00th=[ 1598], 00:17:54.817 | 99.00th=[ 1876], 99.50th=[ 1958], 99.90th=[ 2114], 99.95th=[ 2245], 00:17:54.817 | 99.99th=[ 2638] 00:17:54.817 bw ( KiB/s): min=151552, max=190976, per=100.00%, 
avg=174080.00, stdev=13745.56, samples=9 00:17:54.817 iops : min=37888, max=47744, avg=43520.00, stdev=3436.39, samples=9 00:17:54.817 lat (usec) : 1000=0.58% 00:17:54.817 lat (msec) : 2=99.07%, 4=0.35% 00:17:54.817 cpu : usr=36.24%, sys=58.90%, ctx=14, majf=0, minf=1063 00:17:54.817 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:54.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.817 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:54.817 issued rwts: total=216896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:54.818 00:17:54.818 Run status group 0 (all jobs): 00:17:54.818 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=847MiB (888MB), run=5001-5001msec 00:17:55.385 ----------------------------------------------------- 00:17:55.385 Suppressions used: 00:17:55.385 count bytes template 00:17:55.385 1 11 /usr/src/fio/parse.c 00:17:55.385 1 8 libtcmalloc_minimal.so 00:17:55.386 1 904 libcrypto.so 00:17:55.386 ----------------------------------------------------- 00:17:55.386 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:55.386 15:44:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.386 { 00:17:55.386 "subsystems": [ 00:17:55.386 { 00:17:55.386 "subsystem": "bdev", 00:17:55.386 "config": [ 00:17:55.386 { 00:17:55.386 "params": { 00:17:55.386 "io_mechanism": "io_uring", 00:17:55.386 "conserve_cpu": true, 00:17:55.386 "filename": "/dev/nvme0n1", 00:17:55.386 "name": "xnvme_bdev" 00:17:55.386 }, 00:17:55.386 "method": "bdev_xnvme_create" 00:17:55.386 }, 00:17:55.386 { 00:17:55.386 "method": "bdev_wait_for_examine" 00:17:55.386 } 00:17:55.386 ] 00:17:55.386 } 00:17:55.386 ] 00:17:55.386 } 00:17:55.645 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:55.645 fio-3.35 00:17:55.645 Starting 1 thread 00:18:02.209 00:18:02.210 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83489: Fri Dec 6 15:44:49 2024 00:18:02.210 write: IOPS=43.8k, BW=171MiB/s (179MB/s)(855MiB/5001msec); 0 zone resets 00:18:02.210 slat (nsec): min=2573, max=89582, avg=3852.12, stdev=2119.22 00:18:02.210 clat (usec): min=953, max=2817, avg=1307.28, stdev=176.09 00:18:02.210 lat (usec): min=956, max=2854, avg=1311.13, stdev=176.89 00:18:02.210 clat percentiles (usec): 00:18:02.210 | 1.00th=[ 1037], 5.00th=[ 1090], 10.00th=[ 1106], 20.00th=[ 1156], 00:18:02.210 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1319], 00:18:02.210 | 70.00th=[ 1385], 80.00th=[ 1467], 90.00th=[ 1565], 95.00th=[ 1631], 00:18:02.210 | 99.00th=[ 1778], 99.50th=[ 1827], 99.90th=[ 2040], 99.95th=[ 2278], 00:18:02.210 | 99.99th=[ 2671] 00:18:02.210 bw ( KiB/s): min=146650, max=190976, per=99.03%, avg=173354.89, stdev=18407.93, samples=9 00:18:02.210 iops : min=36662, max=47744, avg=43338.67, stdev=4602.07, samples=9 00:18:02.210 lat (usec) : 1000=0.07% 00:18:02.210 lat (msec) : 2=99.80%, 4=0.13% 00:18:02.210 cpu : usr=34.26%, sys=60.72%, ctx=14, majf=0, minf=1064 00:18:02.210 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:02.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.210 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:02.210 issued rwts: total=0,218865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:02.210 00:18:02.210 Run status group 0 (all jobs): 00:18:02.210 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=855MiB (896MB), run=5001-5001msec 00:18:02.210 ----------------------------------------------------- 00:18:02.210 Suppressions used: 00:18:02.210 count bytes template 00:18:02.210 1 11 /usr/src/fio/parse.c 00:18:02.210 1 8 libtcmalloc_minimal.so 00:18:02.210 1 904 libcrypto.so 00:18:02.210 ----------------------------------------------------- 00:18:02.210 00:18:02.210 ************************************ 00:18:02.210 END TEST xnvme_fio_plugin 00:18:02.210 ************************************ 00:18:02.210 00:18:02.210 real 0m12.482s 00:18:02.210 user 0m5.064s 00:18:02.210 
sys 0m6.661s 00:18:02.210 15:44:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.210 15:44:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:02.210 15:44:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:02.210 15:44:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.210 15:44:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.210 15:44:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.210 ************************************ 00:18:02.210 START TEST xnvme_rpc 00:18:02.210 ************************************ 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83570 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83570 00:18:02.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83570 ']' 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.210 [2024-12-06 15:44:50.397017] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
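This third pass swaps the io_mechanism to io_uring_cmd, which drives the NVMe character device /dev/ng0n1 (passthrough commands) rather than the /dev/nvme0n1 block device used above. The create call, again assuming scripts/rpc.py behind rpc_cmd (sketch only; omitting -c leaves conserve_cpu at false):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    $rpc framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring_cmd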
00:18:02.210 [2024-12-06 15:44:50.397167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83570 ] 00:18:02.210 [2024-12-06 15:44:50.543891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.210 [2024-12-06 15:44:50.581274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.210 xnvme_bdev 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.210 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.469 15:44:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83570 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83570 ']' 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83570 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:02.469 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83570 00:18:02.470 killing process with pid 83570 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83570' 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83570 00:18:02.470 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83570 00:18:03.036 00:18:03.036 real 0m1.298s 00:18:03.036 user 0m1.359s 00:18:03.036 sys 0m0.439s 00:18:03.036 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.036 ************************************ 00:18:03.036 END TEST xnvme_rpc 00:18:03.036 ************************************ 00:18:03.036 15:44:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.036 15:44:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:03.036 15:44:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:03.036 15:44:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.036 15:44:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.036 ************************************ 00:18:03.036 START TEST xnvme_bdevperf 00:18:03.036 ************************************ 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:03.036 15:44:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:03.036 { 00:18:03.036 "subsystems": [ 00:18:03.036 { 00:18:03.036 "subsystem": "bdev", 00:18:03.036 "config": [ 00:18:03.036 { 00:18:03.036 "params": { 00:18:03.036 "io_mechanism": "io_uring_cmd", 00:18:03.036 "conserve_cpu": false, 00:18:03.036 "filename": "/dev/ng0n1", 00:18:03.036 "name": "xnvme_bdev" 00:18:03.036 }, 00:18:03.036 "method": "bdev_xnvme_create" 00:18:03.036 }, 00:18:03.036 { 00:18:03.036 "method": "bdev_wait_for_examine" 00:18:03.036 } 00:18:03.036 ] 00:18:03.036 } 00:18:03.036 ] 00:18:03.036 } 00:18:03.295 [2024-12-06 15:44:51.754890] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:03.295 [2024-12-06 15:44:51.755091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83620 ] 00:18:03.295 [2024-12-06 15:44:51.912790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.295 [2024-12-06 15:44:51.955014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.553 Running I/O for 5 seconds... 00:18:05.427 50752.00 IOPS, 198.25 MiB/s [2024-12-06T15:44:55.498Z] 50016.00 IOPS, 195.38 MiB/s [2024-12-06T15:44:56.433Z] 49557.33 IOPS, 193.58 MiB/s [2024-12-06T15:44:57.371Z] 49488.00 IOPS, 193.31 MiB/s 00:18:08.678 Latency(us) 00:18:08.678 [2024-12-06T15:44:57.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.678 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:08.678 xnvme_bdev : 5.00 49446.22 193.15 0.00 0.00 1291.23 968.15 2755.49 00:18:08.678 [2024-12-06T15:44:57.371Z] =================================================================================================================== 00:18:08.678 [2024-12-06T15:44:57.371Z] Total : 49446.22 193.15 0.00 0.00 1291.23 968.15 2755.49 00:18:08.678 15:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:08.678 15:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:08.678 15:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:08.678 15:44:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:08.678 15:44:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:08.678 { 00:18:08.678 "subsystems": [ 00:18:08.678 { 00:18:08.678 "subsystem": "bdev", 00:18:08.678 "config": [ 00:18:08.678 { 00:18:08.678 "params": { 00:18:08.678 "io_mechanism": "io_uring_cmd", 00:18:08.678 "conserve_cpu": false, 00:18:08.678 "filename": "/dev/ng0n1", 00:18:08.678 "name": "xnvme_bdev" 00:18:08.678 }, 00:18:08.678 "method": "bdev_xnvme_create" 00:18:08.678 }, 00:18:08.678 { 00:18:08.678 "method": "bdev_wait_for_examine" 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 } 00:18:08.678 ] 00:18:08.678 } 00:18:08.937 [2024-12-06 15:44:57.421082] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
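The io_uring_cmd bdevperf passes that follow (randwrite here, then unmap and write_zeroes) differ from the randread pass only in the -w workload flag. A compact way to drive all four, assuming the JSON config above were saved to a file (xnvme.json is a hypothetical name, not from this log):

    # Sweep the four traced workloads with otherwise identical settings.
    for wl in randread randwrite unmap write_zeroes; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json xnvme.json -q 64 -w "$wl" -t 5 -T xnvme_bdev -o 4096
    done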
00:18:08.937 [2024-12-06 15:44:57.421286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83690 ] 00:18:08.937 [2024-12-06 15:44:57.577579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.937 [2024-12-06 15:44:57.617097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.195 Running I/O for 5 seconds... 00:18:11.069 48576.00 IOPS, 189.75 MiB/s [2024-12-06T15:45:01.137Z] 49248.00 IOPS, 192.38 MiB/s [2024-12-06T15:45:02.072Z] 49109.33 IOPS, 191.83 MiB/s [2024-12-06T15:45:03.011Z] 48144.00 IOPS, 188.06 MiB/s 00:18:14.318 Latency(us) 00:18:14.318 [2024-12-06T15:45:03.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.319 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:14.319 xnvme_bdev : 5.00 47034.09 183.73 0.00 0.00 1356.90 893.67 3961.95 00:18:14.319 [2024-12-06T15:45:03.012Z] =================================================================================================================== 00:18:14.319 [2024-12-06T15:45:03.012Z] Total : 47034.09 183.73 0.00 0.00 1356.90 893.67 3961.95 00:18:14.319 15:45:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:14.319 15:45:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:14.319 15:45:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:14.319 15:45:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:14.319 15:45:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:14.578 { 00:18:14.578 "subsystems": [ 00:18:14.578 { 00:18:14.578 "subsystem": "bdev", 00:18:14.578 "config": [ 00:18:14.578 { 00:18:14.578 "params": { 00:18:14.578 "io_mechanism": "io_uring_cmd", 00:18:14.578 "conserve_cpu": false, 00:18:14.578 "filename": "/dev/ng0n1", 00:18:14.578 "name": "xnvme_bdev" 00:18:14.578 }, 00:18:14.578 "method": "bdev_xnvme_create" 00:18:14.578 }, 00:18:14.578 { 00:18:14.578 "method": "bdev_wait_for_examine" 00:18:14.578 } 00:18:14.578 ] 00:18:14.578 } 00:18:14.578 ] 00:18:14.578 } 00:18:14.578 [2024-12-06 15:45:03.114787] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:14.578 [2024-12-06 15:45:03.115329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83753 ] 00:18:14.837 [2024-12-06 15:45:03.271676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.837 [2024-12-06 15:45:03.309809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.837 Running I/O for 5 seconds... 
00:18:17.156 80448.00 IOPS, 314.25 MiB/s [2024-12-06T15:45:06.786Z] 80800.00 IOPS, 315.62 MiB/s [2024-12-06T15:45:07.724Z] 81109.33 IOPS, 316.83 MiB/s [2024-12-06T15:45:08.661Z] 81056.00 IOPS, 316.62 MiB/s [2024-12-06T15:45:08.661Z] 80844.80 IOPS, 315.80 MiB/s 00:18:19.968 Latency(us) 00:18:19.968 [2024-12-06T15:45:08.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.968 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:19.968 xnvme_bdev : 5.00 80817.58 315.69 0.00 0.00 788.67 286.72 3574.69 00:18:19.968 [2024-12-06T15:45:08.661Z] =================================================================================================================== 00:18:19.968 [2024-12-06T15:45:08.661Z] Total : 80817.58 315.69 0.00 0.00 788.67 286.72 3574.69 00:18:20.227 15:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:20.227 15:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:20.227 15:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:20.227 15:45:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:20.227 15:45:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:20.227 { 00:18:20.227 "subsystems": [ 00:18:20.227 { 00:18:20.227 "subsystem": "bdev", 00:18:20.227 "config": [ 00:18:20.227 { 00:18:20.227 "params": { 00:18:20.227 "io_mechanism": "io_uring_cmd", 00:18:20.227 "conserve_cpu": false, 00:18:20.227 "filename": "/dev/ng0n1", 00:18:20.227 "name": "xnvme_bdev" 00:18:20.227 }, 00:18:20.227 "method": "bdev_xnvme_create" 00:18:20.227 }, 00:18:20.227 { 00:18:20.227 "method": "bdev_wait_for_examine" 00:18:20.227 } 00:18:20.227 ] 00:18:20.227 } 00:18:20.227 ] 00:18:20.227 } 00:18:20.227 [2024-12-06 15:45:08.798665] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:20.227 [2024-12-06 15:45:08.798871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83822 ] 00:18:20.485 [2024-12-06 15:45:08.955820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.486 [2024-12-06 15:45:08.998499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.486 Running I/O for 5 seconds... 
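The MiB/s column in these tables follows directly from IOPS at the fixed 4096-byte IO size (-o 4096): for the unmap total just above, 80817.58 x 4096 / 1048576 = 315.69 MiB/s. A quick sanity check:

    # IOPS -> MiB/s at 4 KiB per IO; prints ~315.69, matching the table above.
    echo '80817.58 * 4096 / 1048576' | bc -l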
00:18:22.794 53349.00 IOPS, 208.39 MiB/s [2024-12-06T15:45:12.416Z] 52933.50 IOPS, 206.77 MiB/s [2024-12-06T15:45:13.348Z] 54603.33 IOPS, 213.29 MiB/s [2024-12-06T15:45:14.282Z] 54938.50 IOPS, 214.60 MiB/s [2024-12-06T15:45:14.282Z] 55104.40 IOPS, 215.25 MiB/s 00:18:25.589 Latency(us) 00:18:25.589 [2024-12-06T15:45:14.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.589 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:25.589 xnvme_bdev : 5.00 55080.84 215.16 0.00 0.00 1158.32 290.44 16562.73 00:18:25.589 [2024-12-06T15:45:14.282Z] =================================================================================================================== 00:18:25.589 [2024-12-06T15:45:14.282Z] Total : 55080.84 215.16 0.00 0.00 1158.32 290.44 16562.73 00:18:25.848 00:18:25.848 real 0m22.717s 00:18:25.848 user 0m9.124s 00:18:25.848 sys 0m13.196s 00:18:25.848 15:45:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.848 ************************************ 00:18:25.848 END TEST xnvme_bdevperf 00:18:25.848 ************************************ 00:18:25.848 15:45:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:25.848 15:45:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:25.848 15:45:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.848 15:45:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.848 15:45:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.848 ************************************ 00:18:25.848 START TEST xnvme_fio_plugin 00:18:25.848 ************************************ 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.848 15:45:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.848 { 00:18:25.848 "subsystems": [ 00:18:25.848 { 00:18:25.848 "subsystem": "bdev", 00:18:25.848 "config": [ 00:18:25.848 { 00:18:25.848 "params": { 00:18:25.848 "io_mechanism": "io_uring_cmd", 00:18:25.848 "conserve_cpu": false, 00:18:25.848 "filename": "/dev/ng0n1", 00:18:25.848 "name": "xnvme_bdev" 00:18:25.848 }, 00:18:25.848 "method": "bdev_xnvme_create" 00:18:25.848 }, 00:18:25.848 { 00:18:25.848 "method": "bdev_wait_for_examine" 00:18:25.848 } 00:18:25.848 ] 00:18:25.848 } 00:18:25.848 ] 00:18:25.848 } 00:18:26.107 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:26.107 fio-3.35 00:18:26.107 Starting 1 thread 00:18:32.667 00:18:32.668 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83928: Fri Dec 6 15:45:20 2024 00:18:32.668 read: IOPS=47.3k, BW=185MiB/s (194MB/s)(923MiB/5001msec) 00:18:32.668 slat (usec): min=2, max=109, avg= 3.38, stdev= 2.10 00:18:32.668 clat (usec): min=903, max=3838, avg=1217.08, stdev=142.22 00:18:32.668 lat (usec): min=905, max=3841, avg=1220.46, stdev=142.76 00:18:32.668 clat percentiles (usec): 00:18:32.668 | 1.00th=[ 996], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1106], 00:18:32.668 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1221], 00:18:32.668 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1401], 95.00th=[ 1500], 00:18:32.668 | 99.00th=[ 1647], 99.50th=[ 1729], 99.90th=[ 1942], 99.95th=[ 2245], 00:18:32.668 | 99.99th=[ 2769] 00:18:32.668 bw ( KiB/s): min=154624, max=202240, per=99.96%, avg=188928.00, stdev=15791.28, samples=9 00:18:32.668 iops : min=38656, max=50560, avg=47232.00, stdev=3947.82, samples=9 00:18:32.668 lat (usec) : 1000=1.38% 00:18:32.668 lat (msec) : 2=98.55%, 4=0.07% 00:18:32.668 cpu : usr=31.80%, sys=66.90%, ctx=12, majf=0, minf=1063 00:18:32.668 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:32.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.668 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:18:32.668 issued rwts: total=236306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.668 00:18:32.668 Run status group 0 (all jobs): 00:18:32.668 READ: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=923MiB (968MB), run=5001-5001msec 00:18:32.668 ----------------------------------------------------- 00:18:32.668 Suppressions used: 00:18:32.668 count bytes template 00:18:32.668 1 11 /usr/src/fio/parse.c 00:18:32.668 1 8 libtcmalloc_minimal.so 00:18:32.668 1 904 libcrypto.so 00:18:32.668 ----------------------------------------------------- 00:18:32.668 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.668 15:45:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.668 { 00:18:32.668 "subsystems": [ 00:18:32.668 { 00:18:32.668 "subsystem": "bdev", 00:18:32.668 "config": [ 00:18:32.668 { 00:18:32.668 "params": { 00:18:32.668 "io_mechanism": "io_uring_cmd", 00:18:32.668 "conserve_cpu": false, 00:18:32.668 "filename": "/dev/ng0n1", 00:18:32.668 "name": "xnvme_bdev" 00:18:32.668 }, 00:18:32.668 "method": "bdev_xnvme_create" 00:18:32.668 }, 00:18:32.668 { 00:18:32.668 "method": "bdev_wait_for_examine" 00:18:32.668 } 00:18:32.668 ] 00:18:32.668 } 00:18:32.668 ] 00:18:32.668 } 00:18:32.668 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:32.668 fio-3.35 00:18:32.668 Starting 1 thread 00:18:37.942 00:18:37.942 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84009: Fri Dec 6 15:45:26 2024 00:18:37.942 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(846MiB/5001msec); 0 zone resets 00:18:37.942 slat (nsec): min=2505, max=95440, avg=4234.84, stdev=2247.97 00:18:37.942 clat (usec): min=276, max=2835, avg=1307.27, stdev=160.12 00:18:37.942 lat (usec): min=292, max=2874, avg=1311.50, stdev=160.66 00:18:37.942 clat percentiles (usec): 00:18:37.942 | 1.00th=[ 1012], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1172], 00:18:37.942 | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1319], 00:18:37.942 | 70.00th=[ 1369], 80.00th=[ 1418], 90.00th=[ 1516], 95.00th=[ 1598], 00:18:37.942 | 99.00th=[ 1795], 99.50th=[ 1876], 99.90th=[ 2089], 99.95th=[ 2245], 00:18:37.943 | 99.99th=[ 2606] 00:18:37.943 bw ( KiB/s): min=150904, max=183808, per=98.66%, avg=170979.56, stdev=11735.98, samples=9 00:18:37.943 iops : min=37726, max=45952, avg=42744.89, stdev=2934.00, samples=9 00:18:37.943 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.67% 00:18:37.943 lat (msec) : 2=99.04%, 4=0.20% 00:18:37.943 cpu : usr=33.30%, sys=65.46%, ctx=9, majf=0, minf=1064 00:18:37.943 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:18:37.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.943 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:37.943 issued rwts: total=0,216669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:37.943 00:18:37.943 Run status group 0 (all jobs): 00:18:37.943 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=846MiB (887MB), run=5001-5001msec 00:18:38.511 ----------------------------------------------------- 00:18:38.511 Suppressions used: 00:18:38.511 count bytes template 00:18:38.511 1 11 /usr/src/fio/parse.c 00:18:38.511 1 8 libtcmalloc_minimal.so 00:18:38.511 1 904 libcrypto.so 00:18:38.511 ----------------------------------------------------- 00:18:38.511 00:18:38.511 ************************************ 00:18:38.511 END TEST xnvme_fio_plugin 00:18:38.511 ************************************ 00:18:38.511 00:18:38.511 real 0m12.606s 00:18:38.511 user 0m4.856s 00:18:38.511 sys 0m7.351s 00:18:38.511 15:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.511 15:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:38.511 15:45:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:38.511 15:45:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:38.511 15:45:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
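Both fio passes above come down to one invocation shape: preload the SPDK fio plugin (behind libasan, since this is an ASan build) and select it as fio's ioengine, with the bdev JSON again delivered over a /dev/fd pipe. Stripped of the harness plumbing, the traced randwrite command is roughly the following, where gen_conf is the stand-in helper sketched earlier and --filename names the bdev declared in the JSON rather than a device node:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev

On a non-sanitizer build the libasan entry simply drops out of LD_PRELOAD.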
00:18:38.511 15:45:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:38.511 15:45:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.511 15:45:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.511 15:45:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.511 ************************************ 00:18:38.511 START TEST xnvme_rpc 00:18:38.511 ************************************ 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:38.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84089 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84089 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84089 ']' 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.511 15:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:38.770 [2024-12-06 15:45:27.217202] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:18:38.770 [2024-12-06 15:45:27.217439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84089 ] 00:18:38.770 [2024-12-06 15:45:27.372126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.770 [2024-12-06 15:45:27.413886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 xnvme_bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84089 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84089 ']' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84089 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.707 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84089 00:18:39.966 killing process with pid 84089 00:18:39.966 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.966 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.966 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84089' 00:18:39.966 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84089 00:18:39.966 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84089 00:18:40.225 00:18:40.225 real 0m1.797s 00:18:40.225 user 0m2.013s 00:18:40.225 sys 0m0.515s 00:18:40.225 ************************************ 00:18:40.225 END TEST xnvme_rpc 00:18:40.225 ************************************ 00:18:40.225 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.225 15:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.484 15:45:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:40.484 15:45:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.484 15:45:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.484 15:45:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:40.484 ************************************ 00:18:40.484 START TEST xnvme_bdevperf 00:18:40.484 ************************************ 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:40.484 15:45:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:40.484 { 00:18:40.484 "subsystems": [ 00:18:40.484 { 00:18:40.484 "subsystem": "bdev", 00:18:40.484 "config": [ 00:18:40.484 { 00:18:40.484 "params": { 00:18:40.484 "io_mechanism": "io_uring_cmd", 00:18:40.484 "conserve_cpu": true, 00:18:40.484 "filename": "/dev/ng0n1", 00:18:40.484 "name": "xnvme_bdev" 00:18:40.484 }, 00:18:40.484 "method": "bdev_xnvme_create" 00:18:40.484 }, 00:18:40.484 { 00:18:40.484 "method": "bdev_wait_for_examine" 00:18:40.484 } 00:18:40.484 ] 00:18:40.484 } 00:18:40.484 ] 00:18:40.484 } 00:18:40.484 [2024-12-06 15:45:29.023993] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:40.484 [2024-12-06 15:45:29.024145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84151 ] 00:18:40.484 [2024-12-06 15:45:29.169315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.745 [2024-12-06 15:45:29.216928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.745 Running I/O for 5 seconds... 00:18:42.687 45288.00 IOPS, 176.91 MiB/s [2024-12-06T15:45:32.757Z] 46628.00 IOPS, 182.14 MiB/s [2024-12-06T15:45:33.694Z] 47185.00 IOPS, 184.32 MiB/s [2024-12-06T15:45:34.630Z] 46969.00 IOPS, 183.47 MiB/s 00:18:45.937 Latency(us) 00:18:45.937 [2024-12-06T15:45:34.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.937 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:45.937 xnvme_bdev : 5.00 46511.64 181.69 0.00 0.00 1372.23 409.60 4855.62 00:18:45.937 [2024-12-06T15:45:34.630Z] =================================================================================================================== 00:18:45.937 [2024-12-06T15:45:34.631Z] Total : 46511.64 181.69 0.00 0.00 1372.23 409.60 4855.62 00:18:45.938 15:45:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:45.938 15:45:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:45.938 15:45:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:45.938 15:45:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:45.938 15:45:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:46.211 { 00:18:46.211 "subsystems": [ 00:18:46.211 { 00:18:46.211 "subsystem": "bdev", 00:18:46.211 "config": [ 00:18:46.211 { 00:18:46.211 "params": { 00:18:46.211 "io_mechanism": "io_uring_cmd", 00:18:46.211 "conserve_cpu": true, 00:18:46.211 "filename": "/dev/ng0n1", 00:18:46.211 "name": "xnvme_bdev" 00:18:46.211 }, 00:18:46.211 "method": "bdev_xnvme_create" 00:18:46.211 }, 00:18:46.211 { 00:18:46.211 "method": "bdev_wait_for_examine" 00:18:46.211 } 00:18:46.211 ] 00:18:46.211 } 00:18:46.211 ] 00:18:46.211 } 00:18:46.211 [2024-12-06 15:45:34.728301] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:18:46.211 [2024-12-06 15:45:34.728718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84210 ] 00:18:46.211 [2024-12-06 15:45:34.883457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.470 [2024-12-06 15:45:34.921356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.470 Running I/O for 5 seconds... 00:18:48.779 44213.00 IOPS, 172.71 MiB/s [2024-12-06T15:45:38.407Z] 43545.00 IOPS, 170.10 MiB/s [2024-12-06T15:45:39.343Z] 43707.33 IOPS, 170.73 MiB/s [2024-12-06T15:45:40.279Z] 43660.50 IOPS, 170.55 MiB/s 00:18:51.586 Latency(us) 00:18:51.586 [2024-12-06T15:45:40.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.586 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:51.586 xnvme_bdev : 5.00 43890.55 171.45 0.00 0.00 1453.19 696.32 4557.73 00:18:51.586 [2024-12-06T15:45:40.279Z] =================================================================================================================== 00:18:51.586 [2024-12-06T15:45:40.279Z] Total : 43890.55 171.45 0.00 0.00 1453.19 696.32 4557.73 00:18:51.844 15:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:51.844 15:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:51.844 15:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:51.844 15:45:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:51.844 15:45:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:51.844 { 00:18:51.844 "subsystems": [ 00:18:51.844 { 00:18:51.844 "subsystem": "bdev", 00:18:51.844 "config": [ 00:18:51.844 { 00:18:51.844 "params": { 00:18:51.844 "io_mechanism": "io_uring_cmd", 00:18:51.844 "conserve_cpu": true, 00:18:51.844 "filename": "/dev/ng0n1", 00:18:51.844 "name": "xnvme_bdev" 00:18:51.844 }, 00:18:51.844 "method": "bdev_xnvme_create" 00:18:51.844 }, 00:18:51.844 { 00:18:51.844 "method": "bdev_wait_for_examine" 00:18:51.844 } 00:18:51.844 ] 00:18:51.844 } 00:18:51.844 ] 00:18:51.844 } 00:18:51.844 [2024-12-06 15:45:40.434237] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:51.844 [2024-12-06 15:45:40.434418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84279 ] 00:18:52.102 [2024-12-06 15:45:40.588222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.102 [2024-12-06 15:45:40.628469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.102 Running I/O for 5 seconds... 
00:18:54.410 80064.00 IOPS, 312.75 MiB/s [2024-12-06T15:45:44.039Z] 83360.00 IOPS, 325.62 MiB/s [2024-12-06T15:45:44.976Z] 84736.00 IOPS, 331.00 MiB/s [2024-12-06T15:45:45.909Z] 84528.00 IOPS, 330.19 MiB/s [2024-12-06T15:45:45.909Z] 83520.00 IOPS, 326.25 MiB/s 00:18:57.216 Latency(us) 00:18:57.216 [2024-12-06T15:45:45.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.216 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:57.216 xnvme_bdev : 5.00 83502.72 326.18 0.00 0.00 763.34 465.45 2666.12 00:18:57.216 [2024-12-06T15:45:45.909Z] =================================================================================================================== 00:18:57.216 [2024-12-06T15:45:45.909Z] Total : 83502.72 326.18 0.00 0.00 763.34 465.45 2666.12 00:18:57.474 15:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:57.474 15:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:57.474 15:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:57.474 15:45:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:57.474 15:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:57.474 { 00:18:57.474 "subsystems": [ 00:18:57.474 { 00:18:57.474 "subsystem": "bdev", 00:18:57.474 "config": [ 00:18:57.474 { 00:18:57.474 "params": { 00:18:57.474 "io_mechanism": "io_uring_cmd", 00:18:57.474 "conserve_cpu": true, 00:18:57.474 "filename": "/dev/ng0n1", 00:18:57.474 "name": "xnvme_bdev" 00:18:57.474 }, 00:18:57.474 "method": "bdev_xnvme_create" 00:18:57.474 }, 00:18:57.474 { 00:18:57.474 "method": "bdev_wait_for_examine" 00:18:57.474 } 00:18:57.474 ] 00:18:57.474 } 00:18:57.474 ] 00:18:57.474 } 00:18:57.474 [2024-12-06 15:45:46.102760] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:57.474 [2024-12-06 15:45:46.102971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84342 ] 00:18:57.737 [2024-12-06 15:45:46.251423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.737 [2024-12-06 15:45:46.290340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.737 Running I/O for 5 seconds... 
00:19:00.045 43489.00 IOPS, 169.88 MiB/s [2024-12-06T15:45:49.672Z] 44775.50 IOPS, 174.90 MiB/s [2024-12-06T15:45:50.606Z] 45071.33 IOPS, 176.06 MiB/s [2024-12-06T15:45:51.541Z] 45261.50 IOPS, 176.80 MiB/s 00:19:02.848 Latency(us) 00:19:02.848 [2024-12-06T15:45:51.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.848 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:02.848 xnvme_bdev : 5.00 45363.68 177.20 0.00 0.00 1404.89 338.85 16681.89 00:19:02.848 [2024-12-06T15:45:51.541Z] =================================================================================================================== 00:19:02.848 [2024-12-06T15:45:51.541Z] Total : 45363.68 177.20 0.00 0.00 1404.89 338.85 16681.89 00:19:03.108 00:19:03.108 real 0m22.737s 00:19:03.108 user 0m11.664s 00:19:03.108 sys 0m8.628s 00:19:03.108 15:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.108 15:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.108 ************************************ 00:19:03.108 END TEST xnvme_bdevperf 00:19:03.108 ************************************ 00:19:03.108 15:45:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:03.108 15:45:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.108 15:45:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.108 15:45:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:03.108 ************************************ 00:19:03.108 START TEST xnvme_fio_plugin 00:19:03.108 ************************************ 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.108 15:45:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:03.108 { 00:19:03.108 "subsystems": [ 00:19:03.108 { 00:19:03.108 "subsystem": "bdev", 00:19:03.108 "config": [ 00:19:03.108 { 00:19:03.108 "params": { 00:19:03.108 "io_mechanism": "io_uring_cmd", 00:19:03.108 "conserve_cpu": true, 00:19:03.108 "filename": "/dev/ng0n1", 00:19:03.108 "name": "xnvme_bdev" 00:19:03.108 }, 00:19:03.108 "method": "bdev_xnvme_create" 00:19:03.108 }, 00:19:03.108 { 00:19:03.108 "method": "bdev_wait_for_examine" 00:19:03.108 } 00:19:03.108 ] 00:19:03.108 } 00:19:03.108 ] 00:19:03.108 } 00:19:03.367 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:03.367 fio-3.35 00:19:03.367 Starting 1 thread 00:19:09.930 00:19:09.930 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84449: Fri Dec 6 15:45:57 2024 00:19:09.930 read: IOPS=44.6k, BW=174MiB/s (183MB/s)(871MiB/5001msec) 00:19:09.930 slat (usec): min=2, max=202, avg= 3.98, stdev= 2.13 00:19:09.930 clat (usec): min=869, max=7021, avg=1274.37, stdev=158.33 00:19:09.930 lat (usec): min=872, max=7025, avg=1278.35, stdev=158.91 00:19:09.930 clat percentiles (usec): 00:19:09.930 | 1.00th=[ 1012], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1139], 00:19:09.930 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1287], 00:19:09.930 | 70.00th=[ 1336], 80.00th=[ 1401], 90.00th=[ 1483], 95.00th=[ 1549], 00:19:09.930 | 99.00th=[ 1713], 99.50th=[ 1795], 99.90th=[ 2089], 99.95th=[ 2245], 00:19:09.930 | 99.99th=[ 2442] 00:19:09.930 bw ( KiB/s): min=155136, max=199680, per=99.72%, avg=177945.78, stdev=15096.40, samples=9 00:19:09.930 iops : min=38784, max=49920, avg=44486.44, stdev=3774.10, samples=9 00:19:09.930 lat (usec) : 1000=0.69% 00:19:09.930 lat (msec) : 2=99.17%, 4=0.13%, 10=0.01% 00:19:09.930 cpu : usr=38.42%, sys=57.72%, ctx=12, majf=0, minf=1063 00:19:09.930 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:09.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.930 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:09.930 issued rwts: 
total=223099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:09.930 00:19:09.930 Run status group 0 (all jobs): 00:19:09.930 READ: bw=174MiB/s (183MB/s), 174MiB/s-174MiB/s (183MB/s-183MB/s), io=871MiB (914MB), run=5001-5001msec 00:19:09.930 ----------------------------------------------------- 00:19:09.930 Suppressions used: 00:19:09.930 count bytes template 00:19:09.931 1 11 /usr/src/fio/parse.c 00:19:09.931 1 8 libtcmalloc_minimal.so 00:19:09.931 1 904 libcrypto.so 00:19:09.931 ----------------------------------------------------- 00:19:09.931 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.931 15:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:19:09.931 { 00:19:09.931 "subsystems": [ 00:19:09.931 { 00:19:09.931 "subsystem": "bdev", 00:19:09.931 "config": [ 00:19:09.931 { 00:19:09.931 "params": { 00:19:09.931 "io_mechanism": "io_uring_cmd", 00:19:09.931 "conserve_cpu": true, 00:19:09.931 "filename": "/dev/ng0n1", 00:19:09.931 "name": "xnvme_bdev" 00:19:09.931 }, 00:19:09.931 "method": "bdev_xnvme_create" 00:19:09.931 }, 00:19:09.931 { 00:19:09.931 "method": "bdev_wait_for_examine" 00:19:09.931 } 00:19:09.931 ] 00:19:09.931 } 00:19:09.931 ] 00:19:09.931 } 00:19:09.931 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:09.931 fio-3.35 00:19:09.931 Starting 1 thread 00:19:15.260 00:19:15.261 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84532: Fri Dec 6 15:46:03 2024 00:19:15.261 write: IOPS=46.0k, BW=180MiB/s (188MB/s)(899MiB/5001msec); 0 zone resets 00:19:15.261 slat (nsec): min=2527, max=49264, avg=3368.19, stdev=1783.99 00:19:15.261 clat (usec): min=647, max=3570, avg=1256.93, stdev=130.18 00:19:15.261 lat (usec): min=650, max=3579, avg=1260.30, stdev=130.53 00:19:15.261 clat percentiles (usec): 00:19:15.261 | 1.00th=[ 1045], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1156], 00:19:15.261 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[ 1237], 60.00th=[ 1270], 00:19:15.261 | 70.00th=[ 1303], 80.00th=[ 1336], 90.00th=[ 1418], 95.00th=[ 1483], 00:19:15.261 | 99.00th=[ 1663], 99.50th=[ 1745], 99.90th=[ 1942], 99.95th=[ 2474], 00:19:15.261 | 99.99th=[ 3490] 00:19:15.261 bw ( KiB/s): min=160768, max=190976, per=100.00%, avg=184109.89, stdev=9034.65, samples=9 00:19:15.261 iops : min=40192, max=47744, avg=46027.33, stdev=2258.66, samples=9 00:19:15.261 lat (usec) : 750=0.01%, 1000=0.21% 00:19:15.261 lat (msec) : 2=99.70%, 4=0.09% 00:19:15.261 cpu : usr=41.04%, sys=55.06%, ctx=5, majf=0, minf=1064 00:19:15.261 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:15.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.261 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:15.261 issued rwts: total=0,230057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.261 00:19:15.261 Run status group 0 (all jobs): 00:19:15.261 WRITE: bw=180MiB/s (188MB/s), 180MiB/s-180MiB/s (188MB/s-188MB/s), io=899MiB (942MB), run=5001-5001msec 00:19:15.519 ----------------------------------------------------- 00:19:15.519 Suppressions used: 00:19:15.519 count bytes template 00:19:15.519 1 11 /usr/src/fio/parse.c 00:19:15.519 1 8 libtcmalloc_minimal.so 00:19:15.519 1 904 libcrypto.so 00:19:15.519 ----------------------------------------------------- 00:19:15.519 00:19:15.519 00:19:15.519 real 0m12.409s 00:19:15.519 user 0m5.420s 00:19:15.519 sys 0m6.336s 00:19:15.519 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.519 15:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:15.519 ************************************ 00:19:15.519 END TEST xnvme_fio_plugin 00:19:15.519 ************************************ 00:19:15.519 15:46:04 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84089 00:19:15.519 15:46:04 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84089 ']' 00:19:15.519 15:46:04 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84089 00:19:15.519 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: 
line 958: kill: (84089) - No such process 00:19:15.519 Process with pid 84089 is not found 00:19:15.519 15:46:04 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84089 is not found' 00:19:15.519 00:19:15.519 real 3m5.096s 00:19:15.519 user 1m14.857s 00:19:15.519 sys 1m34.016s 00:19:15.519 15:46:04 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.519 ************************************ 00:19:15.519 END TEST nvme_xnvme 00:19:15.519 ************************************ 00:19:15.519 15:46:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.777 15:46:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:15.777 15:46:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.777 15:46:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.777 15:46:04 -- common/autotest_common.sh@10 -- # set +x 00:19:15.777 ************************************ 00:19:15.777 START TEST blockdev_xnvme 00:19:15.777 ************************************ 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:15.777 * Looking for test storage... 00:19:15.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.777 15:46:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:15.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.777 --rc genhtml_branch_coverage=1 00:19:15.777 --rc genhtml_function_coverage=1 00:19:15.777 --rc genhtml_legend=1 00:19:15.777 --rc geninfo_all_blocks=1 00:19:15.777 --rc geninfo_unexecuted_blocks=1 00:19:15.777 00:19:15.777 ' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:15.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.777 --rc genhtml_branch_coverage=1 00:19:15.777 --rc genhtml_function_coverage=1 00:19:15.777 --rc genhtml_legend=1 00:19:15.777 --rc geninfo_all_blocks=1 00:19:15.777 --rc geninfo_unexecuted_blocks=1 00:19:15.777 00:19:15.777 ' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:15.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.777 --rc genhtml_branch_coverage=1 00:19:15.777 --rc genhtml_function_coverage=1 00:19:15.777 --rc genhtml_legend=1 00:19:15.777 --rc geninfo_all_blocks=1 00:19:15.777 --rc geninfo_unexecuted_blocks=1 00:19:15.777 00:19:15.777 ' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:15.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.777 --rc genhtml_branch_coverage=1 00:19:15.777 --rc genhtml_function_coverage=1 00:19:15.777 --rc genhtml_legend=1 00:19:15.777 --rc geninfo_all_blocks=1 00:19:15.777 --rc geninfo_unexecuted_blocks=1 00:19:15.777 00:19:15.777 ' 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84665 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84665 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 84665 ']' 00:19:15.777 15:46:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.777 15:46:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.034 [2024-12-06 15:46:04.558872] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:16.034 [2024-12-06 15:46:04.559110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84665 ] 00:19:16.034 [2024-12-06 15:46:04.720732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.292 [2024-12-06 15:46:04.769234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.860 15:46:05 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.860 15:46:05 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:16.860 15:46:05 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:16.860 15:46:05 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:16.860 15:46:05 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:16.860 15:46:05 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:16.860 15:46:05 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:17.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:17.992 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:17.992 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:17.992 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:17.992 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:17.992 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:17.992 15:46:06 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:17.992 15:46:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:17.992 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.992 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:17.992 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:17.993 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.993 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.993 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:17.993 nvme0n1 00:19:17.993 nvme0n2 00:19:17.993 nvme0n3 00:19:17.993 nvme1n1 00:19:18.251 nvme2n1 00:19:18.251 nvme3n1 00:19:18.251 15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.251 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:18.251 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.251 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 
15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "43dfd401-9278-4248-8bf8-63aa91dc8b3f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43dfd401-9278-4248-8bf8-63aa91dc8b3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "53be6870-2b0a-46ea-82f0-da614298f572"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53be6870-2b0a-46ea-82f0-da614298f572",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a4200034-702a-42e9-bc7f-1fe8f7cce2aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4200034-702a-42e9-bc7f-1fe8f7cce2aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "16f2dbed-7f35-4148-818a-daa74baa96fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "16f2dbed-7f35-4148-818a-daa74baa96fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "95af3fe9-5be8-436a-9c03-e262c47fdc97"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "95af3fe9-5be8-436a-9c03-e262c47fdc97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "604d1a03-51b8-4506-84e5-7feaa06d5cc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "604d1a03-51b8-4506-84e5-7feaa06d5cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:18.252 15:46:06 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 84665 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84665 ']' 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 84665 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 84665 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.252 killing process with pid 84665 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84665' 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 84665 00:19:18.252 15:46:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 84665 00:19:18.820 15:46:07 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:18.820 15:46:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:18.820 15:46:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:18.820 15:46:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.820 15:46:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.820 ************************************ 00:19:18.820 START TEST bdev_hello_world 00:19:18.820 ************************************ 00:19:18.821 15:46:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:19.080 [2024-12-06 15:46:07.557263] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:19.080 [2024-12-06 15:46:07.557477] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84938 ] 00:19:19.080 [2024-12-06 15:46:07.712012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.080 [2024-12-06 15:46:07.760247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.339 [2024-12-06 15:46:07.983234] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:19.339 [2024-12-06 15:46:07.983298] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:19.339 [2024-12-06 15:46:07.983327] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:19.339 [2024-12-06 15:46:07.985648] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:19.339 [2024-12-06 15:46:07.986041] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:19.339 [2024-12-06 15:46:07.986074] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:19.339 [2024-12-06 15:46:07.986279] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
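The six bdev_xnvme_create lines printed by blockdev.sh@100 above are ordinary RPCs, so the same bdevs can be registered by hand against a running spdk_tgt. A minimal sketch for the first device, using the io_uring mechanism from this run; the RPC names and arguments are copied verbatim from the log, while invoking rpc.py standalone outside this harness is an assumption:

    # Register one namespace as an xNVMe bdev over io_uring; -c selects the
    # conserve-CPU polling behavior, matching the commands queued above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c

    # Dump the resulting bdev; block_size 4096 and num_blocks 1048576 should
    # match the bdev_get_bdevs output shown earlier in this log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1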
00:19:19.339 00:19:19.339 [2024-12-06 15:46:07.986327] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:19.598 00:19:19.598 real 0m0.783s 00:19:19.598 user 0m0.441s 00:19:19.598 sys 0m0.230s 00:19:19.598 15:46:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.598 ************************************ 00:19:19.598 END TEST bdev_hello_world 00:19:19.598 ************************************ 00:19:19.598 15:46:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:19.598 15:46:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:19.598 15:46:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.598 15:46:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.598 15:46:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.858 ************************************ 00:19:19.858 START TEST bdev_bounds 00:19:19.858 ************************************ 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=84969 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:19.858 Process bdevio pid: 84969 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 84969' 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 84969 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 84969 ']' 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.858 15:46:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:19.858 [2024-12-06 15:46:08.374500] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:19.858 [2024-12-06 15:46:08.374683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84969 ] 00:19:19.858 [2024-12-06 15:46:08.516538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.117 [2024-12-06 15:46:08.561366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.117 [2024-12-06 15:46:08.561475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.117 [2024-12-06 15:46:08.561531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.684 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.684 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:20.684 15:46:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:20.944 I/O targets: 00:19:20.944 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.944 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.944 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.944 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:20.944 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:20.944 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:20.944 00:19:20.944 00:19:20.944 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.944 http://cunit.sourceforge.net/ 00:19:20.944 00:19:20.944 00:19:20.944 Suite: bdevio tests on: nvme3n1 00:19:20.944 Test: blockdev write read block ...passed 00:19:20.944 Test: blockdev write zeroes read block ...passed 00:19:20.944 Test: blockdev write zeroes read no split ...passed 00:19:20.944 Test: blockdev write zeroes read split ...passed 00:19:20.944 Test: blockdev write zeroes read split partial ...passed 00:19:20.944 Test: blockdev reset ...passed 00:19:20.944 Test: blockdev write read 8 blocks ...passed 00:19:20.944 Test: blockdev write read size > 128k ...passed 00:19:20.944 Test: blockdev write read invalid size ...passed 00:19:20.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.944 Test: blockdev write read max offset ...passed 00:19:20.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.944 Test: blockdev writev readv 8 blocks ...passed 00:19:20.944 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.944 Test: blockdev writev readv block ...passed 00:19:20.944 Test: blockdev writev readv size > 128k ...passed 00:19:20.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.944 Test: blockdev comparev and writev ...passed 00:19:20.944 Test: blockdev nvme passthru rw ...passed 00:19:20.944 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.944 Test: blockdev nvme admin passthru ...passed 00:19:20.944 Test: blockdev copy ...passed 00:19:20.944 Suite: bdevio tests on: nvme2n1 00:19:20.944 Test: blockdev write read block ...passed 00:19:20.944 Test: blockdev write zeroes read block ...passed 00:19:20.944 Test: blockdev write zeroes read no split ...passed 00:19:20.944 Test: blockdev write zeroes read split ...passed 00:19:20.944 Test: blockdev write zeroes read split partial ...passed 00:19:20.944 Test: blockdev reset ...passed 
00:19:20.944 Test: blockdev write read 8 blocks ...passed 00:19:20.944 Test: blockdev write read size > 128k ...passed 00:19:20.944 Test: blockdev write read invalid size ...passed 00:19:20.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.944 Test: blockdev write read max offset ...passed 00:19:20.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.944 Test: blockdev writev readv 8 blocks ...passed 00:19:20.944 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.944 Test: blockdev writev readv block ...passed 00:19:20.944 Test: blockdev writev readv size > 128k ...passed 00:19:20.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.944 Test: blockdev comparev and writev ...passed 00:19:20.944 Test: blockdev nvme passthru rw ...passed 00:19:20.944 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.944 Test: blockdev nvme admin passthru ...passed 00:19:20.944 Test: blockdev copy ...passed 00:19:20.944 Suite: bdevio tests on: nvme1n1 00:19:20.944 Test: blockdev write read block ...passed 00:19:20.944 Test: blockdev write zeroes read block ...passed 00:19:20.944 Test: blockdev write zeroes read no split ...passed 00:19:20.944 Test: blockdev write zeroes read split ...passed 00:19:20.944 Test: blockdev write zeroes read split partial ...passed 00:19:20.944 Test: blockdev reset ...passed 00:19:20.944 Test: blockdev write read 8 blocks ...passed 00:19:20.944 Test: blockdev write read size > 128k ...passed 00:19:20.944 Test: blockdev write read invalid size ...passed 00:19:20.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.944 Test: blockdev write read max offset ...passed 00:19:20.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.944 Test: blockdev writev readv 8 blocks ...passed 00:19:20.944 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.944 Test: blockdev writev readv block ...passed 00:19:20.944 Test: blockdev writev readv size > 128k ...passed 00:19:20.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.944 Test: blockdev comparev and writev ...passed 00:19:20.944 Test: blockdev nvme passthru rw ...passed 00:19:20.944 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.944 Test: blockdev nvme admin passthru ...passed 00:19:20.944 Test: blockdev copy ...passed 00:19:20.944 Suite: bdevio tests on: nvme0n3 00:19:20.944 Test: blockdev write read block ...passed 00:19:20.944 Test: blockdev write zeroes read block ...passed 00:19:20.944 Test: blockdev write zeroes read no split ...passed 00:19:20.944 Test: blockdev write zeroes read split ...passed 00:19:20.944 Test: blockdev write zeroes read split partial ...passed 00:19:20.945 Test: blockdev reset ...passed 00:19:20.945 Test: blockdev write read 8 blocks ...passed 00:19:20.945 Test: blockdev write read size > 128k ...passed 00:19:20.945 Test: blockdev write read invalid size ...passed 00:19:20.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.945 Test: blockdev write read max offset ...passed 00:19:20.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.945 Test: blockdev writev readv 8 blocks 
...passed 00:19:20.945 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.945 Test: blockdev writev readv block ...passed 00:19:20.945 Test: blockdev writev readv size > 128k ...passed 00:19:20.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.945 Test: blockdev comparev and writev ...passed 00:19:20.945 Test: blockdev nvme passthru rw ...passed 00:19:20.945 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.945 Test: blockdev nvme admin passthru ...passed 00:19:20.945 Test: blockdev copy ...passed 00:19:20.945 Suite: bdevio tests on: nvme0n2 00:19:20.945 Test: blockdev write read block ...passed 00:19:20.945 Test: blockdev write zeroes read block ...passed 00:19:20.945 Test: blockdev write zeroes read no split ...passed 00:19:20.945 Test: blockdev write zeroes read split ...passed 00:19:20.945 Test: blockdev write zeroes read split partial ...passed 00:19:20.945 Test: blockdev reset ...passed 00:19:20.945 Test: blockdev write read 8 blocks ...passed 00:19:20.945 Test: blockdev write read size > 128k ...passed 00:19:20.945 Test: blockdev write read invalid size ...passed 00:19:20.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.945 Test: blockdev write read max offset ...passed 00:19:20.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.945 Test: blockdev writev readv 8 blocks ...passed 00:19:20.945 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.945 Test: blockdev writev readv block ...passed 00:19:20.945 Test: blockdev writev readv size > 128k ...passed 00:19:20.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.945 Test: blockdev comparev and writev ...passed 00:19:20.945 Test: blockdev nvme passthru rw ...passed 00:19:20.945 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.945 Test: blockdev nvme admin passthru ...passed 00:19:20.945 Test: blockdev copy ...passed 00:19:20.945 Suite: bdevio tests on: nvme0n1 00:19:20.945 Test: blockdev write read block ...passed 00:19:20.945 Test: blockdev write zeroes read block ...passed 00:19:20.945 Test: blockdev write zeroes read no split ...passed 00:19:20.945 Test: blockdev write zeroes read split ...passed 00:19:20.945 Test: blockdev write zeroes read split partial ...passed 00:19:20.945 Test: blockdev reset ...passed 00:19:20.945 Test: blockdev write read 8 blocks ...passed 00:19:20.945 Test: blockdev write read size > 128k ...passed 00:19:20.945 Test: blockdev write read invalid size ...passed 00:19:20.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.945 Test: blockdev write read max offset ...passed 00:19:20.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.945 Test: blockdev writev readv 8 blocks ...passed 00:19:20.945 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.945 Test: blockdev writev readv block ...passed 00:19:20.945 Test: blockdev writev readv size > 128k ...passed 00:19:20.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.945 Test: blockdev comparev and writev ...passed 00:19:20.945 Test: blockdev nvme passthru rw ...passed 00:19:20.945 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.945 Test: blockdev nvme admin passthru ...passed 00:19:20.945 Test: blockdev copy ...passed 
00:19:20.945 00:19:20.945 Run Summary: Type Total Ran Passed Failed Inactive 00:19:20.945 suites 6 6 n/a 0 0 00:19:20.945 tests 138 138 138 0 0 00:19:20.945 asserts 780 780 780 0 n/a 00:19:20.945 00:19:20.945 Elapsed time = 0.364 seconds 00:19:20.945 0 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 84969 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 84969 ']' 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 84969 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.945 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84969 00:19:21.204 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.204 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.204 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84969' 00:19:21.204 killing process with pid 84969 00:19:21.204 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 84969 00:19:21.204 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 84969 00:19:21.462 15:46:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:21.462 00:19:21.462 real 0m1.654s 00:19:21.462 user 0m4.231s 00:19:21.462 sys 0m0.406s 00:19:21.462 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.462 15:46:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:21.462 ************************************ 00:19:21.462 END TEST bdev_bounds 00:19:21.462 ************************************ 00:19:21.462 15:46:09 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:21.462 15:46:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:21.462 15:46:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.462 15:46:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:21.462 ************************************ 00:19:21.462 START TEST bdev_nbd 00:19:21.462 ************************************ 00:19:21.462 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:21.462 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:21.462 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:21.462 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:21.462 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
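The Run Summary above (6 suites, 138 tests, 0 failures) was produced by the bdevio app started at blockdev.sh@288 and driven over RPC by tests.py. A sketch of the same two steps outside the harness, with both paths copied verbatim from this log (running them standalone is an assumption):

    # Start bdevio against the generated bdev.json; -w makes it wait for the
    # start RPC, and -s 0 pre-reserves no extra memory (PRE_RESERVED_MEM=0 above).
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    # Once it is listening, fire the whole CUnit matrix over RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests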
00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85024 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85024 /var/tmp/spdk-nbd.sock 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85024 ']' 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.463 15:46:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:21.463 [2024-12-06 15:46:10.093642] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:21.463 [2024-12-06 15:46:10.093821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.721 [2024-12-06 15:46:10.235347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.721 [2024-12-06 15:46:10.275175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:22.657 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:22.916 
1+0 records in 00:19:22.916 1+0 records out 00:19:22.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592246 s, 6.9 MB/s 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:22.916 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.175 1+0 records in 00:19:23.175 1+0 records out 00:19:23.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522607 s, 7.8 MB/s 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.175 15:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:23.434 15:46:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.434 1+0 records in 00:19:23.434 1+0 records out 00:19:23.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705804 s, 5.8 MB/s 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.434 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.435 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.435 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.693 1+0 records in 00:19:23.693 1+0 records out 00:19:23.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743546 s, 5.5 MB/s 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.693 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.694 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.694 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.694 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.694 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.952 1+0 records in 00:19:23.952 1+0 records out 00:19:23.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737926 s, 5.6 MB/s 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.952 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:24.211 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:24.212 15:46:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.212 1+0 records in 00:19:24.212 1+0 records out 00:19:24.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114065 s, 3.6 MB/s 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.212 15:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd0", 00:19:24.471 "bdev_name": "nvme0n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd1", 00:19:24.471 "bdev_name": "nvme0n2" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd2", 00:19:24.471 "bdev_name": "nvme0n3" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd3", 00:19:24.471 "bdev_name": "nvme1n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd4", 00:19:24.471 "bdev_name": "nvme2n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd5", 00:19:24.471 "bdev_name": "nvme3n1" 00:19:24.471 } 00:19:24.471 ]' 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd0", 00:19:24.471 "bdev_name": "nvme0n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd1", 00:19:24.471 "bdev_name": "nvme0n2" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd2", 00:19:24.471 "bdev_name": "nvme0n3" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd3", 00:19:24.471 "bdev_name": "nvme1n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd4", 00:19:24.471 "bdev_name": "nvme2n1" 00:19:24.471 }, 00:19:24.471 { 00:19:24.471 "nbd_device": "/dev/nbd5", 00:19:24.471 "bdev_name": "nvme3n1" 00:19:24.471 } 00:19:24.471 ]' 00:19:24.471 15:46:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.471 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.730 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.988 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.248 15:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.507 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.766 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.025 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.285 15:46:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:26.543 /dev/nbd0 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.543 1+0 records in 00:19:26.543 1+0 records out 00:19:26.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549665 s, 7.5 MB/s 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.543 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:27.108 /dev/nbd1 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.108 1+0 records in 00:19:27.108 1+0 records out 00:19:27.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554658 s, 7.4 MB/s 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.108 15:46:15 
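[Editor's note] The attach path above repeats one readiness pattern per device: poll /proc/partitions until the nbd name appears, then prove the device actually answers I/O by pulling a single 4 KiB block with O_DIRECT and checking the copied size. A minimal sketch of that waitfornbd helper, reconstructed from the common/autotest_common.sh xtrace — the retry delay and the scratch-file path are assumptions, since the trace only shows the successful path:

    # Reconstruction of waitfornbd as exercised in the trace above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the real helper waits between polls
        done
        # Read one 4 KiB block with O_DIRECT to confirm the device answers I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]   # fail if nothing was copied back
    }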
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.108 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:27.108 /dev/nbd10 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.109 1+0 records in 00:19:27.109 1+0 records out 00:19:27.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834962 s, 4.9 MB/s 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.109 15:46:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:27.366 /dev/nbd11 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.366 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.367 15:46:16 
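[Editor's note] Each /dev/nbdN above is wired up by the same loop: walk the bdev and nbd lists in lockstep, issue nbd_start_disk over the RPC socket, then run the readiness check. A simplified sketch under the names visible in the trace — the real nbd_common.sh receives the lists as single space-separated strings, and rpc.py is assumed to be invoked from the repo root:

    # Simplified attach loop mirroring bdev/nbd_common.sh@9-17 in the trace.
    nbd_start_disks() {
        local rpc_server=$1
        local -a bdev_list=($2) nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            scripts/rpc.py -s "$rpc_server" nbd_start_disk \
                "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

    # Usage as in the run above:
    # nbd_start_disks /var/tmp/spdk-nbd.sock \
    #     'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' \
    #     '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'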
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.367 1+0 records in 00:19:27.367 1+0 records out 00:19:27.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538351 s, 7.6 MB/s 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.367 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:27.625 /dev/nbd12 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.883 1+0 records in 00:19:27.883 1+0 records out 00:19:27.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704693 s, 5.8 MB/s 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.883 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:28.140 /dev/nbd13 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.140 1+0 records in 00:19:28.140 1+0 records out 00:19:28.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849761 s, 4.8 MB/s 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.140 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd0", 00:19:28.397 "bdev_name": "nvme0n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd1", 00:19:28.397 "bdev_name": "nvme0n2" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd10", 00:19:28.397 "bdev_name": "nvme0n3" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd11", 00:19:28.397 "bdev_name": "nvme1n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd12", 00:19:28.397 "bdev_name": "nvme2n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd13", 00:19:28.397 "bdev_name": "nvme3n1" 00:19:28.397 } 00:19:28.397 ]' 00:19:28.397 15:46:16 
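[Editor's note] The count check running here leans entirely on the RPC's JSON output: nbd_get_disks returns a list of {nbd_device, bdev_name} pairs, jq flattens it to device paths, and grep -c tallies them. The bare `true` steps in the trace exist because grep -c exits nonzero when it counts zero matches, so the helper has to swallow that failure. A sketch of the check as traced (the `|| true` guard is inferred from the xtrace):

    # Count the NBD devices the SPDK app currently exports.
    nbd_get_count() {
        local rpc_server=$1
        scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # grep -c fails on 0 matches
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [[ $count -eq 6 ]] || exit 1   # all six devices must be attached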
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd0", 00:19:28.397 "bdev_name": "nvme0n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd1", 00:19:28.397 "bdev_name": "nvme0n2" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd10", 00:19:28.397 "bdev_name": "nvme0n3" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd11", 00:19:28.397 "bdev_name": "nvme1n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd12", 00:19:28.397 "bdev_name": "nvme2n1" 00:19:28.397 }, 00:19:28.397 { 00:19:28.397 "nbd_device": "/dev/nbd13", 00:19:28.397 "bdev_name": "nvme3n1" 00:19:28.397 } 00:19:28.397 ]' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:28.397 /dev/nbd1 00:19:28.397 /dev/nbd10 00:19:28.397 /dev/nbd11 00:19:28.397 /dev/nbd12 00:19:28.397 /dev/nbd13' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:28.397 /dev/nbd1 00:19:28.397 /dev/nbd10 00:19:28.397 /dev/nbd11 00:19:28.397 /dev/nbd12 00:19:28.397 /dev/nbd13' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:28.397 256+0 records in 00:19:28.397 256+0 records out 00:19:28.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789667 s, 133 MB/s 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.397 15:46:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:28.655 256+0 records in 00:19:28.655 256+0 records out 00:19:28.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159537 s, 6.6 MB/s 00:19:28.655 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.655 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:28.655 256+0 records in 00:19:28.655 256+0 records out 00:19:28.655 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.163405 s, 6.4 MB/s 00:19:28.655 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.655 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:28.912 256+0 records in 00:19:28.912 256+0 records out 00:19:28.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159852 s, 6.6 MB/s 00:19:28.912 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.912 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:29.170 256+0 records in 00:19:29.170 256+0 records out 00:19:29.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159709 s, 6.6 MB/s 00:19:29.170 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.170 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:29.170 256+0 records in 00:19:29.170 256+0 records out 00:19:29.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161236 s, 6.5 MB/s 00:19:29.170 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.170 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:29.428 256+0 records in 00:19:29.428 256+0 records out 00:19:29.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160731 s, 6.5 MB/s 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.428 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.686 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.256 15:46:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.822 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.081 15:46:19 
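[Editor's note] The teardown in flight here follows the data check completed just above: the harness filled a 1 MiB scratch file from /dev/urandom, dd'd it onto each device with O_DIRECT, then cmp'd the first MiB back byte-for-byte before stopping the disks. A condensed sketch of that nbd_dd_data_verify helper — scratch path shortened, and abort-on-mismatch via the caller's `set -e` is assumed:

    # Condensed nbd_dd_data_verify: write random data out, compare it back.
    nbd_dd_data_verify() {
        local -a nbd_list=($1)
        local operation=$2 tmp_file=/tmp/nbdrandtest i
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of noise
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                # -b prints differing bytes; -n 1M limits the compare window.
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }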
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.081 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:31.340 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:31.340 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:31.340 15:46:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:31.340 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:31.598 malloc_lvol_verify 00:19:31.857 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:32.115 5da00db1-1394-4db6-9fbe-54635ca50911 00:19:32.115 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:32.374 77fa7b78-8cb2-4d4c-aead-b4531e562388 00:19:32.374 15:46:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:32.374 /dev/nbd0 00:19:32.374 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:32.374 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:32.374 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:32.374 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
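[Editor's note] The final nbd check above layers logical volumes on top of NBD: carve a malloc bdev into an lvstore, create a 4 MiB lvol, export it as /dev/nbd0, wait for the kernel to report a nonzero capacity in sysfs, and put a filesystem on it. Condensed from the RPC calls in the trace — the 8192 seen in the capacity check is this lvol's size in 512-byte sectors:

    # lvol-over-NBD smoke test, condensed from the trace.
    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in lvs
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    # The device is usable once sysfs reports a nonzero sector count.
    [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]
    mkfs.ext4 /dev/nbd0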
00:19:32.375 mke2fs 1.47.0 (5-Feb-2023) 00:19:32.375 Discarding device blocks: 0/4096 done 00:19:32.375 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:32.375 00:19:32.375 Allocating group tables: 0/1 done 00:19:32.375 Writing inode tables: 0/1 done 00:19:32.375 Creating journal (1024 blocks): done 00:19:32.375 Writing superblocks and filesystem accounting information: 0/1 done 00:19:32.375 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.375 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:32.633 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.633 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85024 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85024 ']' 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85024 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85024 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.634 killing process with pid 85024 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85024' 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85024 00:19:32.634 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85024 00:19:33.201 15:46:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:33.201 00:19:33.201 real 0m11.607s 00:19:33.201 user 0m16.574s 00:19:33.201 sys 0m4.087s 00:19:33.201 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.201 15:46:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:33.201 ************************************ 
00:19:33.201 END TEST bdev_nbd 00:19:33.201 ************************************ 00:19:33.201 15:46:21 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:33.201 15:46:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:33.201 15:46:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:33.201 15:46:21 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:33.201 15:46:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.201 15:46:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.201 15:46:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:33.201 ************************************ 00:19:33.201 START TEST bdev_fio 00:19:33.201 ************************************ 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:33.201 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:33.201 ************************************ 00:19:33.201 START TEST bdev_fio_rw_verify 00:19:33.201 ************************************ 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.201 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.202 15:46:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.459 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.459 fio-3.35 00:19:33.459 Starting 6 threads 00:19:45.662 00:19:45.662 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85442: Fri Dec 6 15:46:32 2024 00:19:45.662 read: IOPS=27.2k, BW=106MiB/s (111MB/s)(1062MiB/10001msec) 00:19:45.662 slat (usec): min=2, max=4061, avg=10.43, stdev=12.64 00:19:45.663 clat (usec): min=110, max=590915, avg=651.43, 
stdev=3226.75 00:19:45.663 lat (usec): min=117, max=590927, avg=661.85, stdev=3226.92 00:19:45.663 clat percentiles (usec): 00:19:45.663 | 50.000th=[ 603], 99.000th=[ 1336], 99.900th=[ 3294], 00:19:45.663 | 99.990th=[ 9503], 99.999th=[591397] 00:19:45.663 write: IOPS=27.5k, BW=107MiB/s (113MB/s)(1073MiB/10001msec); 0 zone resets 00:19:45.663 slat (usec): min=11, max=4124, avg=34.08, stdev=47.12 00:19:45.663 clat (usec): min=102, max=5677, avg=785.93, stdev=309.65 00:19:45.663 lat (usec): min=117, max=5715, avg=820.01, stdev=314.73 00:19:45.663 clat percentiles (usec): 00:19:45.663 | 50.000th=[ 766], 99.000th=[ 1647], 99.900th=[ 2671], 99.990th=[ 4817], 00:19:45.663 | 99.999th=[ 5342] 00:19:45.663 bw ( KiB/s): min=93718, max=134759, per=100.00%, avg=111066.84, stdev=1972.59, samples=113 00:19:45.663 iops : min=23428, max=33689, avg=27766.10, stdev=493.10, samples=113 00:19:45.663 lat (usec) : 250=2.48%, 500=23.50%, 750=32.78%, 1000=26.60% 00:19:45.663 lat (msec) : 2=14.39%, 4=0.20%, 10=0.04%, 20=0.01%, 50=0.01% 00:19:45.663 lat (msec) : 750=0.01% 00:19:45.663 cpu : usr=48.49%, sys=33.80%, ctx=7446, majf=0, minf=25224 00:19:45.663 IO depths : 1=11.2%, 2=23.5%, 4=51.3%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.663 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.663 issued rwts: total=271794,274718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.663 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.663 00:19:45.663 Run status group 0 (all jobs): 00:19:45.663 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1062MiB (1113MB), run=10001-10001msec 00:19:45.663 WRITE: bw=107MiB/s (113MB/s), 107MiB/s-107MiB/s (113MB/s-113MB/s), io=1073MiB (1125MB), run=10001-10001msec 00:19:45.663 ----------------------------------------------------- 00:19:45.663 Suppressions used: 00:19:45.663 count bytes template 00:19:45.663 6 48 /usr/src/fio/parse.c 00:19:45.663 2742 263232 /usr/src/fio/iolog.c 00:19:45.663 1 8 libtcmalloc_minimal.so 00:19:45.663 1 904 libcrypto.so 00:19:45.663 ----------------------------------------------------- 00:19:45.663 00:19:45.663 00:19:45.663 real 0m11.333s 00:19:45.663 user 0m29.932s 00:19:45.663 sys 0m20.678s 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:45.663 ************************************ 00:19:45.663 END TEST bdev_fio_rw_verify 00:19:45.663 ************************************ 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "43dfd401-9278-4248-8bf8-63aa91dc8b3f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43dfd401-9278-4248-8bf8-63aa91dc8b3f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "53be6870-2b0a-46ea-82f0-da614298f572"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53be6870-2b0a-46ea-82f0-da614298f572",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a4200034-702a-42e9-bc7f-1fe8f7cce2aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4200034-702a-42e9-bc7f-1fe8f7cce2aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "16f2dbed-7f35-4148-818a-daa74baa96fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "16f2dbed-7f35-4148-818a-daa74baa96fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "95af3fe9-5be8-436a-9c03-e262c47fdc97"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "95af3fe9-5be8-436a-9c03-e262c47fdc97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "604d1a03-51b8-4506-84e5-7feaa06d5cc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "604d1a03-51b8-4506-84e5-7feaa06d5cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:45.663 /home/vagrant/spdk_repo/spdk 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:19:45.663 00:19:45.663 real 0m11.543s 00:19:45.663 user 0m30.047s 00:19:45.663 sys 0m20.772s 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.663 15:46:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:45.663 ************************************ 00:19:45.663 END TEST bdev_fio 00:19:45.664 ************************************ 00:19:45.664 15:46:33 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:45.664 15:46:33 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:45.664 15:46:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:45.664 15:46:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.664 15:46:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:45.664 ************************************ 00:19:45.664 START TEST bdev_verify 00:19:45.664 ************************************ 00:19:45.664 15:46:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:45.664 [2024-12-06 15:46:33.350572] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:45.664 [2024-12-06 15:46:33.350721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85605 ] 00:19:45.664 [2024-12-06 15:46:33.509535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:45.664 [2024-12-06 15:46:33.556431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.664 [2024-12-06 15:46:33.556483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.664 Running I/O for 5 seconds... 
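The bdev_verify pass above drives bdevperf directly against the generated bdev table. A minimal sketch of that invocation, with flag meanings summarized from bdevperf usage; the -C semantics are an assumption inferred from the dual-core Job rows below, not confirmed by this log:

  #   --json <file>   bdev definitions to load at startup
  #   -q 128          per-job queue depth
  #   -o 4096         I/O size in bytes (4 KiB)
  #   -w verify       write the range, then read it back and compare
  #   -t 5            run time in seconds
  #   -C              assumed: let every core submit I/O to each bdev
  #   -m 0x3          core mask, cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3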
00:19:47.532 24928.00 IOPS, 97.38 MiB/s [2024-12-06T15:46:37.160Z] 24151.00 IOPS, 94.34 MiB/s [2024-12-06T15:46:38.097Z] 24495.67 IOPS, 95.69 MiB/s [2024-12-06T15:46:39.033Z] 24883.75 IOPS, 97.20 MiB/s [2024-12-06T15:46:39.033Z] 25244.60 IOPS, 98.61 MiB/s 00:19:50.340 Latency(us) 00:19:50.340 [2024-12-06T15:46:39.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.340 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0x80000 00:19:50.340 nvme0n1 : 5.02 1860.85 7.27 0.00 0.00 68682.72 10247.45 69587.32 00:19:50.340 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x80000 length 0x80000 00:19:50.340 nvme0n1 : 5.06 1770.65 6.92 0.00 0.00 72175.20 15609.48 81026.33 00:19:50.340 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0x80000 00:19:50.340 nvme0n2 : 5.03 1857.05 7.25 0.00 0.00 68709.71 12630.57 58386.62 00:19:50.340 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x80000 length 0x80000 00:19:50.340 nvme0n2 : 5.05 1774.56 6.93 0.00 0.00 71902.58 12153.95 73876.95 00:19:50.340 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0x80000 00:19:50.340 nvme0n3 : 5.03 1856.53 7.25 0.00 0.00 68626.90 15192.44 55526.87 00:19:50.340 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x80000 length 0x80000 00:19:50.340 nvme0n3 : 5.07 1767.07 6.90 0.00 0.00 72104.04 13226.36 70063.94 00:19:50.340 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0x20000 00:19:50.340 nvme1n1 : 5.07 1869.04 7.30 0.00 0.00 68065.51 7923.90 64344.44 00:19:50.340 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x20000 length 0x20000 00:19:50.340 nvme1n1 : 5.07 1766.42 6.90 0.00 0.00 72029.91 17039.36 62914.56 00:19:50.340 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0xa0000 00:19:50.340 nvme2n1 : 5.06 1870.63 7.31 0.00 0.00 67904.07 2770.39 70063.94 00:19:50.340 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0xa0000 length 0xa0000 00:19:50.340 nvme2n1 : 5.07 1765.79 6.90 0.00 0.00 71953.79 15490.33 70540.57 00:19:50.340 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0x0 length 0xbd0bd 00:19:50.340 nvme3n1 : 5.06 3475.38 13.58 0.00 0.00 36445.29 3202.33 63391.19 00:19:50.340 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.340 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:50.340 nvme3n1 : 5.08 3369.57 13.16 0.00 0.00 37555.53 4319.42 62914.56 00:19:50.340 [2024-12-06T15:46:39.033Z] =================================================================================================================== 00:19:50.340 [2024-12-06T15:46:39.033Z] Total : 25003.53 97.67 0.00 0.00 61067.68 2770.39 81026.33 00:19:50.599 00:19:50.599 real 0m5.962s 00:19:50.599 user 0m8.932s 00:19:50.599 sys 0m2.040s 00:19:50.599 15:46:39 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.599 15:46:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:50.599 ************************************ 00:19:50.599 END TEST bdev_verify 00:19:50.599 ************************************ 00:19:50.599 15:46:39 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.599 15:46:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:50.599 15:46:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.599 15:46:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:50.599 ************************************ 00:19:50.599 START TEST bdev_verify_big_io 00:19:50.599 ************************************ 00:19:50.599 15:46:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.858 [2024-12-06 15:46:39.390311] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:50.858 [2024-12-06 15:46:39.390518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85694 ] 00:19:50.858 [2024-12-06 15:46:39.547324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.122 [2024-12-06 15:46:39.599336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.122 [2024-12-06 15:46:39.599404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.387 Running I/O for 5 seconds... 
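At -o 65536 each I/O moves 64 KiB, so the MiB/s column in the table below is just IOPS scaled by the block size. Checking the first interim sample:

  # 1728 IOPS x 65536 B = 113,246,208 B/s; / 1,048,576 B per MiB = 108.00 MiB/s
  echo $(( 1728 * 65536 / 1048576 ))   # prints 108, matching '1728.00 IOPS, 108.00 MiB/s'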
00:19:56.575 1728.00 IOPS, 108.00 MiB/s [2024-12-06T15:46:45.834Z] 3753.00 IOPS, 234.56 MiB/s [2024-12-06T15:46:45.834Z] 4094.33 IOPS, 255.90 MiB/s 00:19:57.141 Latency(us) 00:19:57.141 [2024-12-06T15:46:45.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.141 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0x8000 00:19:57.141 nvme0n1 : 5.44 173.54 10.85 0.00 0.00 707009.74 74353.57 899868.86 00:19:57.141 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x8000 length 0x8000 00:19:57.141 nvme0n1 : 5.72 142.58 8.91 0.00 0.00 873575.02 102951.10 846486.81 00:19:57.141 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0x8000 00:19:57.141 nvme0n2 : 5.65 178.51 11.16 0.00 0.00 680977.20 114866.73 1281169.22 00:19:57.141 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x8000 length 0x8000 00:19:57.141 nvme0n2 : 5.77 156.67 9.79 0.00 0.00 780091.70 8102.63 926559.88 00:19:57.141 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0x8000 00:19:57.141 nvme0n3 : 5.64 190.02 11.88 0.00 0.00 618419.97 80549.70 1014258.97 00:19:57.141 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x8000 length 0x8000 00:19:57.141 nvme0n3 : 5.77 152.45 9.53 0.00 0.00 783583.08 42896.29 1151527.10 00:19:57.141 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0x2000 00:19:57.141 nvme1n1 : 5.68 219.62 13.73 0.00 0.00 522621.15 36223.53 1075267.03 00:19:57.141 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x2000 length 0x2000 00:19:57.141 nvme1n1 : 5.77 166.24 10.39 0.00 0.00 699396.22 43134.60 743535.71 00:19:57.141 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0xa000 00:19:57.141 nvme2n1 : 5.74 209.16 13.07 0.00 0.00 539013.94 71970.44 1410811.35 00:19:57.141 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0xa000 length 0xa000 00:19:57.141 nvme2n1 : 5.76 128.21 8.01 0.00 0.00 877558.40 100567.97 2394566.28 00:19:57.141 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0x0 length 0xbd0b 00:19:57.141 nvme3n1 : 5.75 261.46 16.34 0.00 0.00 422543.03 385.40 575763.55 00:19:57.141 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.141 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:57.141 nvme3n1 : 5.79 176.96 11.06 0.00 0.00 623991.04 7983.48 678714.65 00:19:57.141 [2024-12-06T15:46:45.834Z] =================================================================================================================== 00:19:57.141 [2024-12-06T15:46:45.834Z] Total : 2155.42 134.71 0.00 0.00 651909.68 385.40 2394566.28 00:19:57.400 00:19:57.400 real 0m6.683s 00:19:57.400 user 0m12.023s 00:19:57.400 sys 0m0.649s 00:19:57.400 15:46:45 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.400 15:46:45 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.400 ************************************ 00:19:57.400 END TEST bdev_verify_big_io 00:19:57.400 ************************************ 00:19:57.400 15:46:46 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.400 15:46:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:57.400 15:46:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.400 15:46:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.400 ************************************ 00:19:57.400 START TEST bdev_write_zeroes 00:19:57.400 ************************************ 00:19:57.400 15:46:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.658 [2024-12-06 15:46:46.128718] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:57.658 [2024-12-06 15:46:46.128932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85794 ] 00:19:57.658 [2024-12-06 15:46:46.284334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.659 [2024-12-06 15:46:46.331159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.918 Running I/O for 1 seconds... 
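Every START TEST / END TEST banner pair in this log, including the real/user/sys lines, comes from the run_test wrapper visible in the command traces above. A rough sketch of its shape; the banner text is taken from this log, but the internals are an assumption rather than the actual common/autotest_common.sh source:

  run_test() {   # usage: run_test <name> <command> [args...]
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"   # produces the real/user/sys summary seen after each test
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }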
00:19:59.298 73312.00 IOPS, 286.38 MiB/s 00:19:59.298 Latency(us) 00:19:59.298 [2024-12-06T15:46:47.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.298 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme0n1 : 1.02 10669.38 41.68 0.00 0.00 11987.12 7626.01 17158.52 00:19:59.298 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme0n2 : 1.02 10657.17 41.63 0.00 0.00 11991.63 7685.59 16324.42 00:19:59.298 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme0n3 : 1.02 10647.15 41.59 0.00 0.00 11994.16 7685.59 15073.28 00:19:59.298 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme1n1 : 1.02 10637.76 41.55 0.00 0.00 11995.83 7685.59 15490.33 00:19:59.298 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme2n1 : 1.02 10628.37 41.52 0.00 0.00 11997.15 7685.59 16801.05 00:19:59.298 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.298 nvme3n1 : 1.02 19961.17 77.97 0.00 0.00 6357.43 2859.75 17396.83 00:19:59.298 [2024-12-06T15:46:47.991Z] =================================================================================================================== 00:19:59.298 [2024-12-06T15:46:47.991Z] Total : 73200.99 285.94 0.00 0.00 10453.53 2859.75 17396.83 00:19:59.298 00:19:59.298 real 0m1.823s 00:19:59.298 user 0m0.970s 00:19:59.298 sys 0m0.681s 00:19:59.298 15:46:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.298 15:46:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:59.298 ************************************ 00:19:59.298 END TEST bdev_write_zeroes 00:19:59.298 ************************************ 00:19:59.298 15:46:47 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.298 15:46:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:59.298 15:46:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.298 15:46:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.298 ************************************ 00:19:59.298 START TEST bdev_json_nonenclosed 00:19:59.298 ************************************ 00:19:59.298 15:46:47 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.558 [2024-12-06 15:46:48.008389] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:59.558 [2024-12-06 15:46:48.008584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85831 ] 00:19:59.558 [2024-12-06 15:46:48.162219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.558 [2024-12-06 15:46:48.195269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.558 [2024-12-06 15:46:48.195407] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:59.558 [2024-12-06 15:46:48.195432] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:59.558 [2024-12-06 15:46:48.195449] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:59.818 00:19:59.818 real 0m0.371s 00:19:59.818 user 0m0.151s 00:19:59.818 sys 0m0.117s 00:19:59.818 15:46:48 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.818 15:46:48 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:59.818 ************************************ 00:19:59.818 END TEST bdev_json_nonenclosed 00:19:59.818 ************************************ 00:19:59.818 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.818 15:46:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:59.818 15:46:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.818 15:46:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.818 ************************************ 00:19:59.818 START TEST bdev_json_nonarray 00:19:59.818 ************************************ 00:19:59.818 15:46:48 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.818 [2024-12-06 15:46:48.431028] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:59.818 [2024-12-06 15:46:48.431236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85856 ] 00:20:00.077 [2024-12-06 15:46:48.586928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.077 [2024-12-06 15:46:48.621048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.078 [2024-12-06 15:46:48.621166] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
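The error above comes from feeding bdevperf a config whose top-level 'subsystems' key is not an array. The actual test/bdev/nonarray.json contents are not shown in this log; a hypothetical minimal shape that would trigger the same json_config rejection:

  # hypothetical reproduction: an object where an array is required
  printf '{ "subsystems": {} }\n' > /tmp/nonarray-example.json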
00:20:00.078 [2024-12-06 15:46:48.621192] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:00.078 [2024-12-06 15:46:48.621209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:00.078 00:20:00.078 real 0m0.372s 00:20:00.078 user 0m0.154s 00:20:00.078 sys 0m0.114s 00:20:00.078 15:46:48 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.078 ************************************ 00:20:00.078 END TEST bdev_json_nonarray 00:20:00.078 15:46:48 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:00.078 ************************************ 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:00.078 15:46:48 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:00.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.208 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.208 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.208 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.208 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.208 ************************************ 00:20:07.208 END TEST blockdev_xnvme 00:20:07.208 ************************************ 00:20:07.208 00:20:07.208 real 0m51.301s 00:20:07.208 user 1m17.924s 00:20:07.208 sys 0m47.254s 00:20:07.208 15:46:55 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.208 15:46:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:07.208 15:46:55 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:07.208 15:46:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.208 15:46:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.208 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:20:07.208 ************************************ 00:20:07.208 START TEST ublk 00:20:07.208 ************************************ 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:07.208 * Looking for test storage... 
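The ublk suite opens by comparing the installed lcov version against 1.15; the xtrace below steps through scripts/common.sh's cmp_versions field by field (split on '.-:', numeric compare per field). A condensed sketch of the less-than path, reconstructed from the traced calls; the helper name here is illustrative, not the script's own:

  version_lt() {                    # version_lt 1.15 2  -> true (1 < 2 in field 0)
      local -a v1 v2; local IFS=.-:
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                      # equal versions are not less-than
  }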
00:20:07.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.208 15:46:55 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.208 15:46:55 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.208 15:46:55 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.208 15:46:55 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.208 15:46:55 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.208 15:46:55 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:07.208 15:46:55 ublk -- scripts/common.sh@345 -- # : 1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.208 15:46:55 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.208 15:46:55 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@353 -- # local d=1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.208 15:46:55 ublk -- scripts/common.sh@355 -- # echo 1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.208 15:46:55 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@353 -- # local d=2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.208 15:46:55 ublk -- scripts/common.sh@355 -- # echo 2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.208 15:46:55 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.208 15:46:55 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.208 15:46:55 ublk -- scripts/common.sh@368 -- # return 0 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.208 --rc genhtml_branch_coverage=1 00:20:07.208 --rc genhtml_function_coverage=1 00:20:07.208 --rc genhtml_legend=1 00:20:07.208 --rc geninfo_all_blocks=1 00:20:07.208 --rc geninfo_unexecuted_blocks=1 00:20:07.208 00:20:07.208 ' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.208 --rc genhtml_branch_coverage=1 00:20:07.208 --rc genhtml_function_coverage=1 00:20:07.208 --rc genhtml_legend=1 00:20:07.208 --rc geninfo_all_blocks=1 00:20:07.208 --rc geninfo_unexecuted_blocks=1 00:20:07.208 00:20:07.208 ' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.208 --rc genhtml_branch_coverage=1 00:20:07.208 --rc 
genhtml_function_coverage=1 00:20:07.208 --rc genhtml_legend=1 00:20:07.208 --rc geninfo_all_blocks=1 00:20:07.208 --rc geninfo_unexecuted_blocks=1 00:20:07.208 00:20:07.208 ' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.208 --rc genhtml_branch_coverage=1 00:20:07.208 --rc genhtml_function_coverage=1 00:20:07.208 --rc genhtml_legend=1 00:20:07.208 --rc geninfo_all_blocks=1 00:20:07.208 --rc geninfo_unexecuted_blocks=1 00:20:07.208 00:20:07.208 ' 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:07.208 15:46:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:07.208 15:46:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:07.208 15:46:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:07.208 15:46:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:07.208 15:46:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:07.208 15:46:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:07.208 15:46:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:07.208 15:46:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:07.208 15:46:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.208 15:46:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:07.208 ************************************ 00:20:07.208 START TEST test_save_ublk_config 00:20:07.208 ************************************ 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86158 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86158 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86158 ']' 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:07.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.208 15:46:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.464 [2024-12-06 15:46:55.945596] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:20:07.464 [2024-12-06 15:46:55.945807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86158 ] 00:20:07.464 [2024-12-06 15:46:56.103376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.464 [2024-12-06 15:46:56.141674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.395 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:08.395 [2024-12-06 15:46:56.922957] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:08.395 [2024-12-06 15:46:56.924101] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:08.395 malloc0 00:20:08.395 [2024-12-06 15:46:56.963080] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:08.396 [2024-12-06 15:46:56.963159] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:08.396 [2024-12-06 15:46:56.963172] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:08.396 [2024-12-06 15:46:56.963187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:08.396 [2024-12-06 15:46:56.971232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:08.396 [2024-12-06 15:46:56.971265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:08.396 [2024-12-06 15:46:56.978100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:08.396 [2024-12-06 15:46:56.978216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:08.396 [2024-12-06 15:46:56.994986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:08.396 0 00:20:08.396 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.396 15:46:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:08.396 15:46:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.396 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:08.654 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.654 15:46:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:08.654 "subsystems": [ 00:20:08.654 { 00:20:08.654 "subsystem": "fsdev", 00:20:08.654 
"config": [ 00:20:08.654 { 00:20:08.654 "method": "fsdev_set_opts", 00:20:08.654 "params": { 00:20:08.654 "fsdev_io_pool_size": 65535, 00:20:08.654 "fsdev_io_cache_size": 256 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "keyring", 00:20:08.654 "config": [] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "iobuf", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "iobuf_set_options", 00:20:08.654 "params": { 00:20:08.654 "small_pool_count": 8192, 00:20:08.654 "large_pool_count": 1024, 00:20:08.654 "small_bufsize": 8192, 00:20:08.654 "large_bufsize": 135168, 00:20:08.654 "enable_numa": false 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "sock", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "sock_set_default_impl", 00:20:08.654 "params": { 00:20:08.654 "impl_name": "posix" 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "sock_impl_set_options", 00:20:08.654 "params": { 00:20:08.654 "impl_name": "ssl", 00:20:08.654 "recv_buf_size": 4096, 00:20:08.654 "send_buf_size": 4096, 00:20:08.654 "enable_recv_pipe": true, 00:20:08.654 "enable_quickack": false, 00:20:08.654 "enable_placement_id": 0, 00:20:08.654 "enable_zerocopy_send_server": true, 00:20:08.654 "enable_zerocopy_send_client": false, 00:20:08.654 "zerocopy_threshold": 0, 00:20:08.654 "tls_version": 0, 00:20:08.654 "enable_ktls": false 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "sock_impl_set_options", 00:20:08.654 "params": { 00:20:08.654 "impl_name": "posix", 00:20:08.654 "recv_buf_size": 2097152, 00:20:08.654 "send_buf_size": 2097152, 00:20:08.654 "enable_recv_pipe": true, 00:20:08.654 "enable_quickack": false, 00:20:08.654 "enable_placement_id": 0, 00:20:08.654 "enable_zerocopy_send_server": true, 00:20:08.654 "enable_zerocopy_send_client": false, 00:20:08.654 "zerocopy_threshold": 0, 00:20:08.654 "tls_version": 0, 00:20:08.654 "enable_ktls": false 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "vmd", 00:20:08.654 "config": [] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "accel", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "accel_set_options", 00:20:08.654 "params": { 00:20:08.654 "small_cache_size": 128, 00:20:08.654 "large_cache_size": 16, 00:20:08.654 "task_count": 2048, 00:20:08.654 "sequence_count": 2048, 00:20:08.654 "buf_count": 2048 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "bdev", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "bdev_set_options", 00:20:08.654 "params": { 00:20:08.654 "bdev_io_pool_size": 65535, 00:20:08.654 "bdev_io_cache_size": 256, 00:20:08.654 "bdev_auto_examine": true, 00:20:08.654 "iobuf_small_cache_size": 128, 00:20:08.654 "iobuf_large_cache_size": 16 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_raid_set_options", 00:20:08.654 "params": { 00:20:08.654 "process_window_size_kb": 1024, 00:20:08.654 "process_max_bandwidth_mb_sec": 0 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_iscsi_set_options", 00:20:08.654 "params": { 00:20:08.654 "timeout_sec": 30 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_nvme_set_options", 00:20:08.654 "params": { 00:20:08.654 "action_on_timeout": "none", 00:20:08.654 "timeout_us": 0, 00:20:08.654 "timeout_admin_us": 0, 00:20:08.654 
"keep_alive_timeout_ms": 10000, 00:20:08.654 "arbitration_burst": 0, 00:20:08.654 "low_priority_weight": 0, 00:20:08.654 "medium_priority_weight": 0, 00:20:08.654 "high_priority_weight": 0, 00:20:08.654 "nvme_adminq_poll_period_us": 10000, 00:20:08.654 "nvme_ioq_poll_period_us": 0, 00:20:08.654 "io_queue_requests": 0, 00:20:08.654 "delay_cmd_submit": true, 00:20:08.654 "transport_retry_count": 4, 00:20:08.654 "bdev_retry_count": 3, 00:20:08.654 "transport_ack_timeout": 0, 00:20:08.654 "ctrlr_loss_timeout_sec": 0, 00:20:08.654 "reconnect_delay_sec": 0, 00:20:08.654 "fast_io_fail_timeout_sec": 0, 00:20:08.654 "disable_auto_failback": false, 00:20:08.654 "generate_uuids": false, 00:20:08.654 "transport_tos": 0, 00:20:08.654 "nvme_error_stat": false, 00:20:08.654 "rdma_srq_size": 0, 00:20:08.654 "io_path_stat": false, 00:20:08.654 "allow_accel_sequence": false, 00:20:08.654 "rdma_max_cq_size": 0, 00:20:08.654 "rdma_cm_event_timeout_ms": 0, 00:20:08.654 "dhchap_digests": [ 00:20:08.654 "sha256", 00:20:08.654 "sha384", 00:20:08.654 "sha512" 00:20:08.654 ], 00:20:08.654 "dhchap_dhgroups": [ 00:20:08.654 "null", 00:20:08.654 "ffdhe2048", 00:20:08.654 "ffdhe3072", 00:20:08.654 "ffdhe4096", 00:20:08.654 "ffdhe6144", 00:20:08.654 "ffdhe8192" 00:20:08.654 ] 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_nvme_set_hotplug", 00:20:08.654 "params": { 00:20:08.654 "period_us": 100000, 00:20:08.654 "enable": false 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_malloc_create", 00:20:08.654 "params": { 00:20:08.654 "name": "malloc0", 00:20:08.654 "num_blocks": 8192, 00:20:08.654 "block_size": 4096, 00:20:08.654 "physical_block_size": 4096, 00:20:08.654 "uuid": "a398307c-5433-47fc-b8cc-f799b971fdbc", 00:20:08.654 "optimal_io_boundary": 0, 00:20:08.654 "md_size": 0, 00:20:08.654 "dif_type": 0, 00:20:08.654 "dif_is_head_of_md": false, 00:20:08.654 "dif_pi_format": 0 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "bdev_wait_for_examine" 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "scsi", 00:20:08.654 "config": null 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "scheduler", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "framework_set_scheduler", 00:20:08.654 "params": { 00:20:08.654 "name": "static" 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "vhost_scsi", 00:20:08.654 "config": [] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "vhost_blk", 00:20:08.654 "config": [] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "ublk", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "ublk_create_target", 00:20:08.654 "params": { 00:20:08.654 "cpumask": "1" 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "ublk_start_disk", 00:20:08.654 "params": { 00:20:08.654 "bdev_name": "malloc0", 00:20:08.654 "ublk_id": 0, 00:20:08.654 "num_queues": 1, 00:20:08.654 "queue_depth": 128 00:20:08.654 } 00:20:08.654 } 00:20:08.654 ] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "nbd", 00:20:08.654 "config": [] 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "subsystem": "nvmf", 00:20:08.654 "config": [ 00:20:08.654 { 00:20:08.654 "method": "nvmf_set_config", 00:20:08.654 "params": { 00:20:08.654 "discovery_filter": "match_any", 00:20:08.654 "admin_cmd_passthru": { 00:20:08.654 "identify_ctrlr": false 00:20:08.654 }, 00:20:08.654 "dhchap_digests": [ 00:20:08.654 "sha256", 00:20:08.654 
"sha384", 00:20:08.654 "sha512" 00:20:08.654 ], 00:20:08.654 "dhchap_dhgroups": [ 00:20:08.654 "null", 00:20:08.654 "ffdhe2048", 00:20:08.654 "ffdhe3072", 00:20:08.654 "ffdhe4096", 00:20:08.654 "ffdhe6144", 00:20:08.654 "ffdhe8192" 00:20:08.654 ] 00:20:08.654 } 00:20:08.654 }, 00:20:08.654 { 00:20:08.654 "method": "nvmf_set_max_subsystems", 00:20:08.654 "params": { 00:20:08.655 "max_subsystems": 1024 00:20:08.655 } 00:20:08.655 }, 00:20:08.655 { 00:20:08.655 "method": "nvmf_set_crdt", 00:20:08.655 "params": { 00:20:08.655 "crdt1": 0, 00:20:08.655 "crdt2": 0, 00:20:08.655 "crdt3": 0 00:20:08.655 } 00:20:08.655 } 00:20:08.655 ] 00:20:08.655 }, 00:20:08.655 { 00:20:08.655 "subsystem": "iscsi", 00:20:08.655 "config": [ 00:20:08.655 { 00:20:08.655 "method": "iscsi_set_options", 00:20:08.655 "params": { 00:20:08.655 "node_base": "iqn.2016-06.io.spdk", 00:20:08.655 "max_sessions": 128, 00:20:08.655 "max_connections_per_session": 2, 00:20:08.655 "max_queue_depth": 64, 00:20:08.655 "default_time2wait": 2, 00:20:08.655 "default_time2retain": 20, 00:20:08.655 "first_burst_length": 8192, 00:20:08.655 "immediate_data": true, 00:20:08.655 "allow_duplicated_isid": false, 00:20:08.655 "error_recovery_level": 0, 00:20:08.655 "nop_timeout": 60, 00:20:08.655 "nop_in_interval": 30, 00:20:08.655 "disable_chap": false, 00:20:08.655 "require_chap": false, 00:20:08.655 "mutual_chap": false, 00:20:08.655 "chap_group": 0, 00:20:08.655 "max_large_datain_per_connection": 64, 00:20:08.655 "max_r2t_per_connection": 4, 00:20:08.655 "pdu_pool_size": 36864, 00:20:08.655 "immediate_data_pool_size": 16384, 00:20:08.655 "data_out_pool_size": 2048 00:20:08.655 } 00:20:08.655 } 00:20:08.655 ] 00:20:08.655 } 00:20:08.655 ] 00:20:08.655 }' 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86158 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86158 ']' 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86158 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86158 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.655 killing process with pid 86158 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86158' 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86158 00:20:08.655 15:46:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86158 00:20:09.220 [2024-12-06 15:46:57.795636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:09.220 [2024-12-06 15:46:57.830051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:09.220 [2024-12-06 15:46:57.830187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:09.220 [2024-12-06 15:46:57.839983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:09.220 [2024-12-06 15:46:57.840045] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:20:09.220 [2024-12-06 15:46:57.840057] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:09.220 [2024-12-06 15:46:57.840092] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:09.220 [2024-12-06 15:46:57.840253] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86196 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86196 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86196 ']' 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.786 15:46:58 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:09.786 "subsystems": [ 00:20:09.786 { 00:20:09.786 "subsystem": "fsdev", 00:20:09.786 "config": [ 00:20:09.786 { 00:20:09.786 "method": "fsdev_set_opts", 00:20:09.786 "params": { 00:20:09.786 "fsdev_io_pool_size": 65535, 00:20:09.786 "fsdev_io_cache_size": 256 00:20:09.786 } 00:20:09.786 } 00:20:09.786 ] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "keyring", 00:20:09.786 "config": [] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "iobuf", 00:20:09.786 "config": [ 00:20:09.786 { 00:20:09.786 "method": "iobuf_set_options", 00:20:09.786 "params": { 00:20:09.786 "small_pool_count": 8192, 00:20:09.786 "large_pool_count": 1024, 00:20:09.786 "small_bufsize": 8192, 00:20:09.786 "large_bufsize": 135168, 00:20:09.786 "enable_numa": false 00:20:09.786 } 00:20:09.786 } 00:20:09.786 ] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "sock", 00:20:09.786 "config": [ 00:20:09.786 { 00:20:09.786 "method": "sock_set_default_impl", 00:20:09.786 "params": { 00:20:09.786 "impl_name": "posix" 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "sock_impl_set_options", 00:20:09.786 "params": { 00:20:09.786 "impl_name": "ssl", 00:20:09.786 "recv_buf_size": 4096, 00:20:09.786 "send_buf_size": 4096, 00:20:09.786 "enable_recv_pipe": true, 00:20:09.786 "enable_quickack": false, 00:20:09.786 "enable_placement_id": 0, 00:20:09.786 "enable_zerocopy_send_server": true, 00:20:09.786 "enable_zerocopy_send_client": false, 00:20:09.786 "zerocopy_threshold": 0, 00:20:09.786 "tls_version": 0, 00:20:09.786 "enable_ktls": false 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "sock_impl_set_options", 00:20:09.786 "params": { 00:20:09.786 "impl_name": "posix", 00:20:09.786 "recv_buf_size": 2097152, 00:20:09.786 "send_buf_size": 2097152, 00:20:09.786 "enable_recv_pipe": true, 00:20:09.786 "enable_quickack": false, 00:20:09.786 "enable_placement_id": 0, 00:20:09.786 "enable_zerocopy_send_server": true, 00:20:09.786 "enable_zerocopy_send_client": false, 00:20:09.786 "zerocopy_threshold": 0, 00:20:09.786 "tls_version": 0, 00:20:09.786 "enable_ktls": false 00:20:09.786 } 00:20:09.786 } 
00:20:09.786 ] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "vmd", 00:20:09.786 "config": [] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "accel", 00:20:09.786 "config": [ 00:20:09.786 { 00:20:09.786 "method": "accel_set_options", 00:20:09.786 "params": { 00:20:09.786 "small_cache_size": 128, 00:20:09.786 "large_cache_size": 16, 00:20:09.786 "task_count": 2048, 00:20:09.786 "sequence_count": 2048, 00:20:09.786 "buf_count": 2048 00:20:09.786 } 00:20:09.786 } 00:20:09.786 ] 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "subsystem": "bdev", 00:20:09.786 "config": [ 00:20:09.786 { 00:20:09.786 "method": "bdev_set_options", 00:20:09.786 "params": { 00:20:09.786 "bdev_io_pool_size": 65535, 00:20:09.786 "bdev_io_cache_size": 256, 00:20:09.786 "bdev_auto_examine": true, 00:20:09.786 "iobuf_small_cache_size": 128, 00:20:09.786 "iobuf_large_cache_size": 16 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "bdev_raid_set_options", 00:20:09.786 "params": { 00:20:09.786 "process_window_size_kb": 1024, 00:20:09.786 "process_max_bandwidth_mb_sec": 0 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "bdev_iscsi_set_options", 00:20:09.786 "params": { 00:20:09.786 "timeout_sec": 30 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "bdev_nvme_set_options", 00:20:09.786 "params": { 00:20:09.786 "action_on_timeout": "none", 00:20:09.786 "timeout_us": 0, 00:20:09.786 "timeout_admin_us": 0, 00:20:09.786 "keep_alive_timeout_ms": 10000, 00:20:09.786 "arbitration_burst": 0, 00:20:09.786 "low_priority_weight": 0, 00:20:09.786 "medium_priority_weight": 0, 00:20:09.786 "high_priority_weight": 0, 00:20:09.786 "nvme_adminq_poll_period_us": 10000, 00:20:09.786 "nvme_ioq_poll_period_us": 0, 00:20:09.786 "io_queue_requests": 0, 00:20:09.786 "delay_cmd_submit": true, 00:20:09.786 "transport_retry_count": 4, 00:20:09.786 "bdev_retry_count": 3, 00:20:09.786 "transport_ack_timeout": 0, 00:20:09.786 "ctrlr_loss_timeout_sec": 0, 00:20:09.786 "reconnect_delay_sec": 0, 00:20:09.786 "fast_io_fail_timeout_sec": 0, 00:20:09.786 "disable_auto_failback": false, 00:20:09.786 "generate_uuids": false, 00:20:09.786 "transport_tos": 0, 00:20:09.786 "nvme_error_stat": false, 00:20:09.786 "rdma_srq_size": 0, 00:20:09.786 "io_path_stat": false, 00:20:09.786 "allow_accel_sequence": false, 00:20:09.786 "rdma_max_cq_size": 0, 00:20:09.786 "rdma_cm_event_timeout_ms": 0, 00:20:09.786 "dhchap_digests": [ 00:20:09.786 "sha256", 00:20:09.786 "sha384", 00:20:09.786 "sha512" 00:20:09.786 ], 00:20:09.786 "dhchap_dhgroups": [ 00:20:09.786 "null", 00:20:09.786 "ffdhe2048", 00:20:09.786 "ffdhe3072", 00:20:09.786 "ffdhe4096", 00:20:09.786 "ffdhe6144", 00:20:09.786 "ffdhe8192" 00:20:09.786 ] 00:20:09.786 } 00:20:09.786 }, 00:20:09.786 { 00:20:09.786 "method": "bdev_nvme_set_hotplug", 00:20:09.786 "params": { 00:20:09.786 "period_us": 100000, 00:20:09.786 "enable": false 00:20:09.786 } 00:20:09.786 }, 00:20:09.787 { 00:20:09.787 "method": "bdev_malloc_create", 00:20:09.787 "params": { 00:20:09.787 "name": "malloc0", 00:20:09.787 "num_blocks": 8192, 00:20:09.787 "block_size": 4096, 00:20:09.787 "physical_block_size": 4096, 00:20:09.787 "uuid": "a398307c-5433-47fc-b8cc-f799b971fdbc", 00:20:09.787 "optimal_io_boundary": 0, 00:20:09.787 "md_size": 0, 00:20:09.787 "dif_type": 0, 00:20:09.787 "dif_is_head_of_md": false, 00:20:09.787 "dif_pi_format": 0 00:20:09.787 } 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "method": "bdev_wait_for_examine" 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 }, 
00:20:09.787 { 00:20:09.787 "subsystem": "scsi", 00:20:09.787 "config": null 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "scheduler", 00:20:09.787 "config": [ 00:20:09.787 { 00:20:09.787 "method": "framework_set_scheduler", 00:20:09.787 "params": { 00:20:09.787 "name": "static" 00:20:09.787 } 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "vhost_scsi", 00:20:09.787 "config": [] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "vhost_blk", 00:20:09.787 "config": [] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "ublk", 00:20:09.787 "config": [ 00:20:09.787 { 00:20:09.787 "method": "ublk_create_target", 00:20:09.787 "params": { 00:20:09.787 "cpumask": "1" 00:20:09.787 } 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "method": "ublk_start_disk", 00:20:09.787 "params": { 00:20:09.787 "bdev_name": "malloc0", 00:20:09.787 "ublk_id": 0, 00:20:09.787 "num_queues": 1, 00:20:09.787 "queue_depth": 128 00:20:09.787 } 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "nbd", 00:20:09.787 "config": [] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "nvmf", 00:20:09.787 "config": [ 00:20:09.787 { 00:20:09.787 "method": "nvmf_set_config", 00:20:09.787 "params": { 00:20:09.787 "discovery_filter": "match_any", 00:20:09.787 "admin_cmd_passthru": { 00:20:09.787 "identify_ctrlr": false 00:20:09.787 }, 00:20:09.787 "dhchap_digests": [ 00:20:09.787 "sha256", 00:20:09.787 "sha384", 00:20:09.787 "sha512" 00:20:09.787 ], 00:20:09.787 "dhchap_dhgroups": [ 00:20:09.787 "null", 00:20:09.787 "ffdhe2048", 00:20:09.787 "ffdhe3072", 00:20:09.787 "ffdhe4096", 00:20:09.787 "ffdhe6144", 00:20:09.787 "ffdhe8192" 00:20:09.787 ] 00:20:09.787 } 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "method": "nvmf_set_max_subsystems", 00:20:09.787 "params": { 00:20:09.787 "max_subsystems": 1024 00:20:09.787 } 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "method": "nvmf_set_crdt", 00:20:09.787 "params": { 00:20:09.787 "crdt1": 0, 00:20:09.787 "crdt2": 0, 00:20:09.787 "crdt3": 0 00:20:09.787 } 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 }, 00:20:09.787 { 00:20:09.787 "subsystem": "iscsi", 00:20:09.787 "config": [ 00:20:09.787 { 00:20:09.787 "method": "iscsi_set_options", 00:20:09.787 "params": { 00:20:09.787 "node_base": "iqn.2016-06.io.spdk", 00:20:09.787 "max_sessions": 128, 00:20:09.787 "max_connections_per_session": 2, 00:20:09.787 "max_queue_depth": 64, 00:20:09.787 "default_time2wait": 2, 00:20:09.787 "default_time2retain": 20, 00:20:09.787 "first_burst_length": 8192, 00:20:09.787 "immediate_data": true, 00:20:09.787 "allow_duplicated_isid": false, 00:20:09.787 "error_recovery_level": 0, 00:20:09.787 "nop_timeout": 60, 00:20:09.787 "nop_in_interval": 30, 00:20:09.787 "disable_chap": false, 00:20:09.787 "require_chap": false, 00:20:09.787 "mutual_chap": false, 00:20:09.787 "chap_group": 0, 00:20:09.787 "max_large_datain_per_connection": 64, 00:20:09.787 "max_r2t_per_connection": 4, 00:20:09.787 "pdu_pool_size": 36864, 00:20:09.787 "immediate_data_pool_size": 16384, 00:20:09.787 "data_out_pool_size": 2048 00:20:09.787 } 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 } 00:20:09.787 ] 00:20:09.787 }' 00:20:09.787 15:46:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.045 [2024-12-06 15:46:58.549099] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:20:10.045 [2024-12-06 15:46:58.549297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86196 ] 00:20:10.045 [2024-12-06 15:46:58.703771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.303 [2024-12-06 15:46:58.741205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.561 [2024-12-06 15:46:59.224961] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:10.561 [2024-12-06 15:46:59.225404] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:10.561 [2024-12-06 15:46:59.233103] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:10.561 [2024-12-06 15:46:59.233174] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:10.561 [2024-12-06 15:46:59.233186] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:10.561 [2024-12-06 15:46:59.233203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:10.561 [2024-12-06 15:46:59.241092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:10.561 [2024-12-06 15:46:59.241110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:10.561 [2024-12-06 15:46:59.248968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:10.561 [2024-12-06 15:46:59.249067] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:10.819 [2024-12-06 15:46:59.266002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86196 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86196 ']' 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86196 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86196 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.819 killing process with pid 86196 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86196' 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86196 00:20:10.819 15:46:59 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86196 00:20:11.387 [2024-12-06 15:46:59.933822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:11.387 [2024-12-06 15:46:59.971992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:11.387 [2024-12-06 15:46:59.972133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:11.387 [2024-12-06 15:46:59.980034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:11.387 [2024-12-06 15:46:59.980082] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:11.387 [2024-12-06 15:46:59.980099] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:11.387 [2024-12-06 15:46:59.980133] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:11.387 [2024-12-06 15:46:59.980277] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:11.954 15:47:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:11.954 00:20:11.954 real 0m4.749s 00:20:11.954 user 0m3.223s 00:20:11.954 sys 0m2.462s 00:20:11.954 15:47:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.954 15:47:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:11.954 ************************************ 00:20:11.955 END TEST test_save_ublk_config 00:20:11.955 ************************************ 00:20:11.955 15:47:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86258 00:20:11.955 15:47:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.955 15:47:00 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:11.955 15:47:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86258 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@835 -- # '[' -z 86258 ']' 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.955 15:47:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:12.214 [2024-12-06 15:47:00.732881] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
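Here the harness relaunches spdk_tgt with -m 0x3 and -L ublk: the 0x3 cpumask is a two-core reactor mask (cores 0 and 1, matching the two "Reactor started" lines right after), and -L ublk enables the ublk debug log component that produces the *DEBUG* ublk.c lines throughout this run. A minimal manual equivalent, assuming the binary and socket paths shown in this log and using rpc_get_methods purely as a readiness probe:

    build/bin/spdk_tgt -m 0x3 -L ublk &
    # poll /var/tmp/spdk.sock until the RPC server answers
    scripts/rpc.py -t 120 rpc_get_methods > /dev/null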
00:20:12.214 [2024-12-06 15:47:00.733104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86258 ] 00:20:12.214 [2024-12-06 15:47:00.888677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:12.472 [2024-12-06 15:47:00.926184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.472 [2024-12-06 15:47:00.926242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.039 15:47:01 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.039 15:47:01 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:13.039 15:47:01 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:13.039 15:47:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.039 15:47:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.039 15:47:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.039 ************************************ 00:20:13.039 START TEST test_create_ublk 00:20:13.039 ************************************ 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:13.039 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.039 [2024-12-06 15:47:01.684004] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:13.039 [2024-12-06 15:47:01.686264] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.039 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:13.039 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.039 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.298 [2024-12-06 15:47:01.785140] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:13.298 [2024-12-06 15:47:01.785658] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:13.298 [2024-12-06 15:47:01.785678] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:13.298 [2024-12-06 15:47:01.785691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:13.298 [2024-12-06 15:47:01.793413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:13.298 [2024-12-06 15:47:01.793442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:13.298 
[2024-12-06 15:47:01.801022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:13.298 [2024-12-06 15:47:01.801734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:13.298 [2024-12-06 15:47:01.819993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.298 15:47:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:13.298 { 00:20:13.298 "ublk_device": "/dev/ublkb0", 00:20:13.298 "id": 0, 00:20:13.298 "queue_depth": 512, 00:20:13.298 "num_queues": 4, 00:20:13.298 "bdev_name": "Malloc0" 00:20:13.298 } 00:20:13.298 ]' 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:13.298 15:47:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:13.557 15:47:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:13.557 15:47:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:13.557 15:47:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:13.557 15:47:02 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
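The template assembled above writes the 0xcc pattern over the first 128 MiB of /dev/ublkb0 for 10 seconds with inline verification; because the job is time_based, the separate read-verify phase never runs, which fio points out just below. A read-back check can still be done afterwards; one hedged way is to reuse the exact job parameters with fio's verify_only flag (not something the harness itself does):

    # re-read and verify the 0xcc pattern without writing anything new
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0 --verify_only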
00:20:13.557 15:47:02 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:13.557 fio: verification read phase will never start because write phase uses all of runtime 00:20:13.557 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:13.557 fio-3.35 00:20:13.557 Starting 1 process 00:20:25.763 00:20:25.763 fio_test: (groupid=0, jobs=1): err= 0: pid=86297: Fri Dec 6 15:47:12 2024 00:20:25.763 write: IOPS=9433, BW=36.8MiB/s (38.6MB/s)(369MiB/10001msec); 0 zone resets 00:20:25.763 clat (usec): min=46, max=3977, avg=104.92, stdev=125.22 00:20:25.763 lat (usec): min=47, max=3977, avg=105.52, stdev=125.24 00:20:25.763 clat percentiles (usec): 00:20:25.763 | 1.00th=[ 52], 5.00th=[ 86], 10.00th=[ 91], 20.00th=[ 93], 00:20:25.763 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 98], 00:20:25.763 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 123], 00:20:25.763 | 99.00th=[ 149], 99.50th=[ 172], 99.90th=[ 2606], 99.95th=[ 3064], 00:20:25.763 | 99.99th=[ 3589] 00:20:25.763 bw ( KiB/s): min=36608, max=48432, per=100.00%, avg=37771.37, stdev=2599.43, samples=19 00:20:25.763 iops : min= 9152, max=12108, avg=9442.84, stdev=649.86, samples=19 00:20:25.763 lat (usec) : 50=0.14%, 100=68.00%, 250=31.49%, 500=0.03%, 750=0.03% 00:20:25.763 lat (usec) : 1000=0.02% 00:20:25.763 lat (msec) : 2=0.11%, 4=0.18% 00:20:25.763 cpu : usr=2.00%, sys=5.09%, ctx=94344, majf=0, minf=796 00:20:25.763 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.763 issued rwts: total=0,94344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.763 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:25.763 00:20:25.763 Run status group 0 (all jobs): 00:20:25.763 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=369MiB (386MB), run=10001-10001msec 00:20:25.763 00:20:25.763 Disk stats (read/write): 00:20:25.763 ublkb0: ios=0/93350, merge=0/0, ticks=0/9243, in_queue=9243, util=99.09% 00:20:25.763 15:47:12 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 [2024-12-06 15:47:12.336144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:25.763 [2024-12-06 15:47:12.385017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:25.763 [2024-12-06 15:47:12.385972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:25.763 [2024-12-06 15:47:12.395001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:25.763 [2024-12-06 15:47:12.395407] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:25.763 [2024-12-06 15:47:12.395427] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 [2024-12-06 15:47:12.411078] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:25.763 request: 00:20:25.763 { 00:20:25.763 "ublk_id": 0, 00:20:25.763 "method": "ublk_stop_disk", 00:20:25.763 "req_id": 1 00:20:25.763 } 00:20:25.763 Got JSON-RPC error response 00:20:25.763 response: 00:20:25.763 { 00:20:25.763 "code": -19, 00:20:25.763 "message": "No such device" 00:20:25.763 } 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.763 15:47:12 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 [2024-12-06 15:47:12.427061] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.763 [2024-12-06 15:47:12.429742] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:25.763 [2024-12-06 15:47:12.429788] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:25.763 15:47:12 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:25.763 15:47:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:25.763 00:20:25.763 real 0m11.010s 00:20:25.763 user 0m0.624s 00:20:25.763 sys 0m0.624s 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 ************************************ 00:20:25.763 END TEST test_create_ublk 00:20:25.763 ************************************ 00:20:25.763 15:47:12 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:25.763 15:47:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.763 15:47:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.763 15:47:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 ************************************ 00:20:25.763 START TEST test_create_multi_ublk 00:20:25.763 ************************************ 00:20:25.763 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:25.763 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:25.763 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.763 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.763 [2024-12-06 15:47:12.748009] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:25.764 [2024-12-06 15:47:12.749175] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 [2024-12-06 15:47:12.871174] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:20:25.764 [2024-12-06 15:47:12.871658] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:25.764 [2024-12-06 15:47:12.871680] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:25.764 [2024-12-06 15:47:12.871688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:25.764 [2024-12-06 15:47:12.883031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:25.764 [2024-12-06 15:47:12.883054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:25.764 [2024-12-06 15:47:12.894977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:25.764 [2024-12-06 15:47:12.895603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:25.764 [2024-12-06 15:47:12.921996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 [2024-12-06 15:47:13.041138] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:25.764 [2024-12-06 15:47:13.041623] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:25.764 [2024-12-06 15:47:13.041642] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:25.764 [2024-12-06 15:47:13.041653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:25.764 [2024-12-06 15:47:13.053039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:25.764 [2024-12-06 15:47:13.053068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:25.764 [2024-12-06 15:47:13.064995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:25.764 [2024-12-06 15:47:13.065628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:25.764 [2024-12-06 15:47:13.101036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:13 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 [2024-12-06 15:47:13.233113] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:25.764 [2024-12-06 15:47:13.233607] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:25.764 [2024-12-06 15:47:13.233630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:25.764 [2024-12-06 15:47:13.233639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:25.764 [2024-12-06 15:47:13.245073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:25.764 [2024-12-06 15:47:13.245098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:25.764 [2024-12-06 15:47:13.256995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:25.764 [2024-12-06 15:47:13.257631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:25.764 [2024-12-06 15:47:13.270046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 [2024-12-06 15:47:13.400127] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:25.764 [2024-12-06 15:47:13.400626] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:25.764 [2024-12-06 15:47:13.400647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:25.764 [2024-12-06 15:47:13.400659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:25.764 [2024-12-06 
15:47:13.412006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:25.764 [2024-12-06 15:47:13.412034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:25.764 [2024-12-06 15:47:13.423976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:25.764 [2024-12-06 15:47:13.424628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:25.764 [2024-12-06 15:47:13.448968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:25.764 { 00:20:25.764 "ublk_device": "/dev/ublkb0", 00:20:25.764 "id": 0, 00:20:25.764 "queue_depth": 512, 00:20:25.764 "num_queues": 4, 00:20:25.764 "bdev_name": "Malloc0" 00:20:25.764 }, 00:20:25.764 { 00:20:25.764 "ublk_device": "/dev/ublkb1", 00:20:25.764 "id": 1, 00:20:25.764 "queue_depth": 512, 00:20:25.764 "num_queues": 4, 00:20:25.764 "bdev_name": "Malloc1" 00:20:25.764 }, 00:20:25.764 { 00:20:25.764 "ublk_device": "/dev/ublkb2", 00:20:25.764 "id": 2, 00:20:25.764 "queue_depth": 512, 00:20:25.764 "num_queues": 4, 00:20:25.764 "bdev_name": "Malloc2" 00:20:25.764 }, 00:20:25.764 { 00:20:25.764 "ublk_device": "/dev/ublkb3", 00:20:25.764 "id": 3, 00:20:25.764 "queue_depth": 512, 00:20:25.764 "num_queues": 4, 00:20:25.764 "bdev_name": "Malloc3" 00:20:25.764 } 00:20:25.764 ]' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
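ublk_get_disks has returned all four devices as a JSON array, and each jq expression in this stretch plucks a single field out of it for comparison against the expected value. The same query works by hand; for example, for the second disk (assuming the default /var/tmp/spdk.sock socket):

    scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'
    # expected output: /dev/ublkb1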
00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:25.764 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:25.765 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:25.765 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:25.765 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:25.765 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:25.765 15:47:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:25.765 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.025 [2024-12-06 15:47:14.531225] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:26.025 [2024-12-06 15:47:14.568038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:26.025 [2024-12-06 15:47:14.569269] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:26.025 [2024-12-06 15:47:14.576010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:26.025 [2024-12-06 15:47:14.576346] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:26.025 [2024-12-06 15:47:14.576364] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.025 [2024-12-06 15:47:14.589069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:26.025 [2024-12-06 15:47:14.628025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:26.025 [2024-12-06 15:47:14.629187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:26.025 [2024-12-06 15:47:14.635013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:26.025 [2024-12-06 15:47:14.635311] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:26.025 [2024-12-06 15:47:14.635328] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.025 [2024-12-06 15:47:14.652074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:26.025 [2024-12-06 15:47:14.682520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:26.025 [2024-12-06 15:47:14.683757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:26.025 [2024-12-06 15:47:14.687997] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:26.025 [2024-12-06 15:47:14.688373] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:26.025 [2024-12-06 15:47:14.688388] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.025 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
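Disk 3 is stopped the same way just below. The multi-disk teardown the harness is driving here boils down to one stop RPC per disk (which issues both STOP_DEV and DEL_DEV to the kernel) followed by a single target destroy and bdev cleanup; a sketch using only commands that appear in this log:

    scripts/rpc.py ublk_stop_disk 3            # sends UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    scripts/rpc.py -t 120 ublk_destroy_target
    scripts/rpc.py bdev_malloc_delete Malloc3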
00:20:26.025 [2024-12-06 15:47:14.701073] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:26.284 [2024-12-06 15:47:14.741412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:26.285 [2024-12-06 15:47:14.742653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:26.285 [2024-12-06 15:47:14.746992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:26.285 [2024-12-06 15:47:14.747627] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:26.285 [2024-12-06 15:47:14.747646] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:26.285 15:47:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.285 15:47:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:26.544 [2024-12-06 15:47:15.041050] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:26.544 [2024-12-06 15:47:15.043784] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:26.544 [2024-12-06 15:47:15.043822] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.544 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.802 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:27.060 00:20:27.060 real 0m2.896s 00:20:27.060 user 0m1.310s 00:20:27.060 sys 0m0.184s 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.060 15:47:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.060 ************************************ 00:20:27.060 END TEST test_create_multi_ublk 00:20:27.060 ************************************ 00:20:27.060 15:47:15 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:27.060 15:47:15 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:27.060 15:47:15 ublk -- ublk/ublk.sh@130 -- # killprocess 86258 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@954 -- # '[' -z 86258 ']' 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@958 -- # kill -0 86258 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@959 -- # uname 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86258 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.060 killing process with pid 86258 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86258' 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@973 -- # kill 86258 00:20:27.060 15:47:15 ublk -- common/autotest_common.sh@978 -- # wait 86258 00:20:27.318 [2024-12-06 15:47:15.956559] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:27.318 [2024-12-06 15:47:15.956698] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:27.885 00:20:27.885 real 0m20.674s 00:20:27.885 user 0m31.718s 00:20:27.885 sys 0m8.224s 00:20:27.885 15:47:16 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.885 ************************************ 00:20:27.885 15:47:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.885 END TEST ublk 00:20:27.885 ************************************ 00:20:27.885 15:47:16 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:27.885 15:47:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:20:27.885 15:47:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.885 15:47:16 -- common/autotest_common.sh@10 -- # set +x 00:20:27.885 ************************************ 00:20:27.885 START TEST ublk_recovery 00:20:27.885 ************************************ 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:27.885 * Looking for test storage... 00:20:27.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.885 15:47:16 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:27.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.885 --rc genhtml_branch_coverage=1 00:20:27.885 --rc genhtml_function_coverage=1 00:20:27.885 --rc genhtml_legend=1 00:20:27.885 --rc geninfo_all_blocks=1 00:20:27.885 --rc geninfo_unexecuted_blocks=1 00:20:27.885 00:20:27.885 ' 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:27.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.885 --rc genhtml_branch_coverage=1 00:20:27.885 --rc genhtml_function_coverage=1 00:20:27.885 --rc genhtml_legend=1 00:20:27.885 --rc geninfo_all_blocks=1 00:20:27.885 --rc geninfo_unexecuted_blocks=1 00:20:27.885 00:20:27.885 ' 00:20:27.885 15:47:16 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:27.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.885 --rc genhtml_branch_coverage=1 00:20:27.886 --rc genhtml_function_coverage=1 00:20:27.886 --rc genhtml_legend=1 00:20:27.886 --rc geninfo_all_blocks=1 00:20:27.886 --rc geninfo_unexecuted_blocks=1 00:20:27.886 00:20:27.886 ' 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:27.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.886 --rc genhtml_branch_coverage=1 00:20:27.886 --rc genhtml_function_coverage=1 00:20:27.886 --rc genhtml_legend=1 00:20:27.886 --rc geninfo_all_blocks=1 00:20:27.886 --rc geninfo_unexecuted_blocks=1 00:20:27.886 00:20:27.886 ' 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:27.886 15:47:16 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86634 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86634 00:20:27.886 15:47:16 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86634 ']' 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.886 15:47:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 [2024-12-06 15:47:16.618295] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:20:28.144 [2024-12-06 15:47:16.618444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86634 ] 00:20:28.144 [2024-12-06 15:47:16.763407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:28.144 [2024-12-06 15:47:16.805010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.144 [2024-12-06 15:47:16.805076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:28.712 15:47:17 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.712 [2024-12-06 15:47:17.144967] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:28.712 [2024-12-06 15:47:17.146752] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.712 15:47:17 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.712 malloc0 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.712 15:47:17 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.712 [2024-12-06 15:47:17.200122] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:28.712 [2024-12-06 15:47:17.200260] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:28.712 [2024-12-06 15:47:17.200273] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:28.712 [2024-12-06 15:47:17.200284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:28.712 [2024-12-06 15:47:17.208140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:28.712 [2024-12-06 15:47:17.208171] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:28.712 [2024-12-06 15:47:17.215985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:28.712 [2024-12-06 15:47:17.216150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:28.712 [2024-12-06 15:47:17.241983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:28.712 1 00:20:28.712 15:47:17 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.712 15:47:17 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:29.661 15:47:18 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86662 00:20:29.661 15:47:18 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:29.661 15:47:18 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:29.920 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:29.920 fio-3.35 00:20:29.920 Starting 1 process 00:20:35.229 15:47:23 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86634 00:20:35.229 15:47:23 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:40.515 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86634 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:40.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.515 15:47:28 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=86772 00:20:40.515 15:47:28 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:40.515 15:47:28 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.515 15:47:28 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 86772 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86772 ']' 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.515 15:47:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.515 [2024-12-06 15:47:28.394881] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
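The kill/restart sequence traced above is the heart of the recovery test: fio is pinned to cores 2-3 and keeps 128 random 4 KiB read/write requests in flight against /dev/ublkb1 while the spdk_tgt process backing it is killed with SIGKILL, then a fresh target is started on the same core mask. The kernel-side ublk device outlives the daemon, which is what lets the new process (pid 86772) reclaim it. A minimal sketch of the sequence, reconstructed from the trace (the backgrounding and pid bookkeeping are paraphrased, not verbatim script lines):

    # drive I/O against the ublk device, then crash its userspace backend mid-run
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"                        # SIGKILL: no clean shutdown, in-flight I/O stalls
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # restart on the same mask 0x3
    spdk_pid=$!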
00:20:40.515 [2024-12-06 15:47:28.395443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86772 ] 00:20:40.515 [2024-12-06 15:47:28.561454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:40.515 [2024-12-06 15:47:28.611081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.515 [2024-12-06 15:47:28.611121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:40.774 15:47:29 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.774 [2024-12-06 15:47:29.306997] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:40.774 [2024-12-06 15:47:29.308801] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.774 15:47:29 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.774 malloc0 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.774 15:47:29 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:40.774 [2024-12-06 15:47:29.355420] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:40.774 [2024-12-06 15:47:29.355487] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:40.774 [2024-12-06 15:47:29.355498] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:40.774 [2024-12-06 15:47:29.363039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:40.774 [2024-12-06 15:47:29.363064] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:40.774 [2024-12-06 15:47:29.363080] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:40.774 [2024-12-06 15:47:29.363159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:40.774 1 00:20:40.774 15:47:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.774 15:47:29 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86662 00:20:40.774 [2024-12-06 15:47:29.371029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:40.774 [2024-12-06 15:47:29.378720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:40.774 [2024-12-06 15:47:29.386240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:40.774 [2024-12-06 
15:47:29.386264] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:36.999 00:21:36.999 fio_test: (groupid=0, jobs=1): err= 0: pid=86665: Fri Dec 6 15:48:18 2024 00:21:36.999 read: IOPS=20.6k, BW=80.5MiB/s (84.4MB/s)(4827MiB/60002msec) 00:21:36.999 slat (usec): min=2, max=273, avg= 5.74, stdev= 2.62 00:21:36.999 clat (usec): min=728, max=6141.1k, avg=3056.49, stdev=44513.57 00:21:36.999 lat (usec): min=732, max=6141.1k, avg=3062.23, stdev=44513.58 00:21:36.999 clat percentiles (usec): 00:21:36.999 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2507], 00:21:36.999 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:21:36.999 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3589], 00:21:36.999 | 99.00th=[ 5735], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 8586], 00:21:36.999 | 99.99th=[13042] 00:21:36.999 bw ( KiB/s): min= 8256, max=96176, per=100.00%, avg=90756.29, stdev=11720.12, samples=108 00:21:36.999 iops : min= 2064, max=24044, avg=22689.06, stdev=2930.03, samples=108 00:21:36.999 write: IOPS=20.6k, BW=80.4MiB/s (84.3MB/s)(4823MiB/60002msec); 0 zone resets 00:21:36.999 slat (usec): min=2, max=459, avg= 6.02, stdev= 2.68 00:21:36.999 clat (usec): min=766, max=6141.1k, avg=3147.91, stdev=43842.47 00:21:36.999 lat (usec): min=771, max=6141.1k, avg=3153.94, stdev=43842.47 00:21:36.999 clat percentiles (usec): 00:21:36.999 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:21:36.999 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:21:36.999 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2999], 95.00th=[ 3490], 00:21:36.999 | 99.00th=[ 5800], 99.50th=[ 6390], 99.90th=[ 7767], 99.95th=[ 8717], 00:21:36.999 | 99.99th=[13173] 00:21:36.999 bw ( KiB/s): min= 8128, max=96248, per=100.00%, avg=90680.97, stdev=11726.94, samples=108 00:21:36.999 iops : min= 2032, max=24062, avg=22670.24, stdev=2931.73, samples=108 00:21:36.999 lat (usec) : 750=0.01%, 1000=0.01% 00:21:36.999 lat (msec) : 2=0.24%, 4=96.07%, 10=3.66%, 20=0.02%, >=2000=0.01% 00:21:36.999 cpu : usr=9.89%, sys=22.77%, ctx=72981, majf=0, minf=13 00:21:36.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:36.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:36.999 issued rwts: total=1235784,1234676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:36.999 00:21:36.999 Run status group 0 (all jobs): 00:21:36.999 READ: bw=80.5MiB/s (84.4MB/s), 80.5MiB/s-80.5MiB/s (84.4MB/s-84.4MB/s), io=4827MiB (5062MB), run=60002-60002msec 00:21:36.999 WRITE: bw=80.4MiB/s (84.3MB/s), 80.4MiB/s-80.4MiB/s (84.3MB/s-84.3MB/s), io=4823MiB (5057MB), run=60002-60002msec 00:21:36.999 00:21:36.999 Disk stats (read/write): 00:21:36.999 ublkb1: ios=1233163/1232002, merge=0/0, ticks=3666345/3646035, in_queue=7312380, util=99.93% 00:21:36.999 15:48:18 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.999 [2024-12-06 15:48:18.524168] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:36.999 [2024-12-06 15:48:18.556085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
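A quick sanity check on the fio summary above:

    4827 MiB read / 60.002 s  ≈ 80.5 MiB/s
    1235784 reads / 60.002 s  ≈ 20.6k IOPS

both matching the reported READ figures (writes mirror them almost exactly). The tail is where the recovery shows up: the >=2000 msec completion-latency bucket (0.01% of ops, clat max ≈ 6.14 s) and the minimum bandwidth samples of ~8 MiB/s correspond to requests that sat queued across the roughly ten-second kill/restart window, yet the job still finishes with err=0 and ublkb1 reports util=99.93%.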
00:21:36.999 [2024-12-06 15:48:18.556263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:36.999 [2024-12-06 15:48:18.563003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:36.999 [2024-12-06 15:48:18.563133] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:36.999 [2024-12-06 15:48:18.563146] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.999 15:48:18 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.999 [2024-12-06 15:48:18.579065] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:36.999 [2024-12-06 15:48:18.581777] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:36.999 [2024-12-06 15:48:18.581818] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.999 15:48:18 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:36.999 15:48:18 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:36.999 15:48:18 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 86772 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 86772 ']' 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 86772 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86772 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.999 killing process with pid 86772 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86772' 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@973 -- # kill 86772 00:21:36.999 15:48:18 ublk_recovery -- common/autotest_common.sh@978 -- # wait 86772 00:21:36.999 [2024-12-06 15:48:19.084429] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:36.999 [2024-12-06 15:48:19.084531] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:36.999 00:21:36.999 real 1m3.312s 00:21:36.999 user 1m44.650s 00:21:36.999 sys 0m30.881s 00:21:36.999 15:48:19 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.999 ************************************ 00:21:36.999 END TEST ublk_recovery 00:21:36.999 ************************************ 00:21:36.999 15:48:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.999 15:48:19 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:36.999 15:48:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:36.999 15:48:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:36.999 15:48:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.999 15:48:19 -- common/autotest_common.sh@10 -- # set +x 00:21:36.999 15:48:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:36.999 15:48:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:36.999 15:48:19 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
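Teardown, traced just before the timing summary, mirrors setup in reverse. A sketch of the cleanup path (shown here as direct rpc.py calls; the script actually goes through its rpc_cmd wrapper):

    scripts/rpc.py ublk_stop_disk 1      # UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    scripts/rpc.py ublk_destroy_target   # finishes shutdown, destroys the ublk target
    kill "$spdk_pid"; wait "$spdk_pid"   # killprocess: terminate and reap spdk_tgt

The '[' 0 -eq 1 ']' lines around this point are autotest.sh walking its suite flags; each unset flag skips a suite, until SPDK_TEST_FTL matches below and run_test launches ftl.sh.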
00:21:36.999 15:48:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:36.999 15:48:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:37.000 15:48:19 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:37.000 15:48:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.000 15:48:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.000 15:48:19 -- common/autotest_common.sh@10 -- # set +x 00:21:37.000 ************************************ 00:21:37.000 START TEST ftl 00:21:37.000 ************************************ 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:37.000 * Looking for test storage... 00:21:37.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.000 15:48:19 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.000 15:48:19 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.000 15:48:19 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.000 15:48:19 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.000 15:48:19 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.000 15:48:19 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:37.000 15:48:19 ftl -- scripts/common.sh@345 -- # : 1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.000 15:48:19 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.000 15:48:19 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@353 -- # local d=1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.000 15:48:19 ftl -- scripts/common.sh@355 -- # echo 1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.000 15:48:19 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@353 -- # local d=2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.000 15:48:19 ftl -- scripts/common.sh@355 -- # echo 2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.000 15:48:19 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.000 15:48:19 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.000 15:48:19 ftl -- scripts/common.sh@368 -- # return 0 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.000 --rc genhtml_branch_coverage=1 00:21:37.000 --rc genhtml_function_coverage=1 00:21:37.000 --rc genhtml_legend=1 00:21:37.000 --rc geninfo_all_blocks=1 00:21:37.000 --rc geninfo_unexecuted_blocks=1 00:21:37.000 00:21:37.000 ' 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.000 --rc genhtml_branch_coverage=1 00:21:37.000 --rc genhtml_function_coverage=1 00:21:37.000 --rc genhtml_legend=1 00:21:37.000 --rc geninfo_all_blocks=1 00:21:37.000 --rc geninfo_unexecuted_blocks=1 00:21:37.000 00:21:37.000 ' 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.000 --rc genhtml_branch_coverage=1 00:21:37.000 --rc genhtml_function_coverage=1 00:21:37.000 --rc genhtml_legend=1 00:21:37.000 --rc geninfo_all_blocks=1 00:21:37.000 --rc geninfo_unexecuted_blocks=1 00:21:37.000 00:21:37.000 ' 00:21:37.000 15:48:19 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.000 --rc genhtml_branch_coverage=1 00:21:37.000 --rc genhtml_function_coverage=1 00:21:37.000 --rc genhtml_legend=1 00:21:37.000 --rc geninfo_all_blocks=1 00:21:37.000 --rc geninfo_unexecuted_blocks=1 00:21:37.000 00:21:37.000 ' 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:37.000 15:48:19 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:37.000 15:48:19 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.000 15:48:19 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.000 15:48:19 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
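The probe above ends with lt 1.15 2 succeeding, so autotest_common.sh selects the legacy lcov 1.x flag spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) for LCOV_OPTS. The comparison itself is a plain field-wise numeric compare; a condensed sketch of what scripts/common.sh does (padding missing fields with 0 is an assumption, the trace never exercises that branch):

    cmp_lt() {                      # usage: cmp_lt 1.15 2  ->  true if $1 < $2
        local IFS=.-:               # split on dots, dashes, and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # here: 1 < 2
        done
        return 1                    # equal is not less-than
    }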
00:21:37.000 15:48:19 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:37.000 15:48:19 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.000 15:48:19 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.000 15:48:19 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.000 15:48:19 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.000 15:48:19 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.000 15:48:19 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:37.000 15:48:19 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:37.000 15:48:19 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.000 15:48:19 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.000 15:48:19 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:37.000 15:48:19 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.000 15:48:19 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.000 15:48:19 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.000 15:48:19 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.000 15:48:19 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:37.000 15:48:19 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:37.000 15:48:19 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.000 15:48:19 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:37.000 15:48:19 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:37.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.000 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:37.000 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:37.000 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:37.000 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:37.000 15:48:20 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87551 00:21:37.000 15:48:20 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:37.000 15:48:20 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87551 00:21:37.000 15:48:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 87551 ']' 00:21:37.000 15:48:20 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.000 15:48:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.000 15:48:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.000 15:48:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.000 15:48:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:37.000 [2024-12-06 15:48:20.601243] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:21:37.000 [2024-12-06 15:48:20.601452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87551 ] 00:21:37.000 [2024-12-06 15:48:20.764070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.000 [2024-12-06 15:48:20.808071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.000 15:48:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.000 15:48:21 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:37.000 15:48:21 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:37.000 15:48:21 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:37.000 15:48:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:37.000 15:48:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:37.000 15:48:22 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:37.000 15:48:22 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:37.000 15:48:22 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@50 -- # break 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:37.000 15:48:23 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@63 -- # break 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@66 -- # killprocess 87551 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 87551 ']' 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@958 -- # kill -0 87551 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@959 -- # uname 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.001 15:48:23 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87551 00:21:37.001 killing process with pid 87551 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87551' 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@973 -- # kill 87551 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@978 -- # wait 87551 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:37.001 15:48:23 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.001 15:48:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:37.001 ************************************ 00:21:37.001 START TEST ftl_fio_basic 00:21:37.001 ************************************ 00:21:37.001 15:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:37.001 * Looking for test storage... 00:21:37.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.001 15:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:37.001 15:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:37.001 15:48:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:37.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.001 --rc genhtml_branch_coverage=1 00:21:37.001 --rc genhtml_function_coverage=1 00:21:37.001 --rc genhtml_legend=1 00:21:37.001 --rc geninfo_all_blocks=1 00:21:37.001 --rc geninfo_unexecuted_blocks=1 00:21:37.001 00:21:37.001 ' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:37.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.001 --rc genhtml_branch_coverage=1 00:21:37.001 --rc genhtml_function_coverage=1 00:21:37.001 --rc genhtml_legend=1 00:21:37.001 --rc geninfo_all_blocks=1 00:21:37.001 --rc geninfo_unexecuted_blocks=1 00:21:37.001 00:21:37.001 ' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:37.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.001 --rc genhtml_branch_coverage=1 00:21:37.001 --rc genhtml_function_coverage=1 00:21:37.001 --rc genhtml_legend=1 00:21:37.001 --rc geninfo_all_blocks=1 00:21:37.001 --rc geninfo_unexecuted_blocks=1 00:21:37.001 00:21:37.001 ' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:37.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.001 --rc genhtml_branch_coverage=1 00:21:37.001 --rc genhtml_function_coverage=1 00:21:37.001 --rc genhtml_legend=1 00:21:37.001 --rc geninfo_all_blocks=1 00:21:37.001 --rc geninfo_unexecuted_blocks=1 00:21:37.001 00:21:37.001 ' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:37.001 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87672 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87672 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 87672 ']' 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.002 15:48:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:37.002 [2024-12-06 15:48:24.178512] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
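fio.sh was invoked as "fio.sh 0000:00:11.0 0000:00:10.0 basic": base device, cache device, suite name. The associative array traced above maps that last argument onto a list of fio job files, so the basic suite will run randw-verify, randw-verify-j2 and randw-verify-depth128 against an FTL bdev named ftl0. A condensed sketch of the dispatch (the run loop and job-file layout are assumptions about the script body, not verbatim trace lines):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    device=$1 cache_device=$2
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=$testdir/config/ftl.json
    for workload in ${suite[$3]}; do
        fio_bdev "$testdir/config/fio/$workload.fio"   # assumed helper and paths
    done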
00:21:37.002 [2024-12-06 15:48:24.178753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87672 ] 00:21:37.002 [2024-12-06 15:48:24.332764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:37.002 [2024-12-06 15:48:24.368264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.002 [2024-12-06 15:48:24.368376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.002 [2024-12-06 15:48:24.368452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:37.002 { 00:21:37.002 "name": "nvme0n1", 00:21:37.002 "aliases": [ 00:21:37.002 "4a69af88-3810-409a-ad0f-95155663f791" 00:21:37.002 ], 00:21:37.002 "product_name": "NVMe disk", 00:21:37.002 "block_size": 4096, 00:21:37.002 "num_blocks": 1310720, 00:21:37.002 "uuid": "4a69af88-3810-409a-ad0f-95155663f791", 00:21:37.002 "numa_id": -1, 00:21:37.002 "assigned_rate_limits": { 00:21:37.002 "rw_ios_per_sec": 0, 00:21:37.002 "rw_mbytes_per_sec": 0, 00:21:37.002 "r_mbytes_per_sec": 0, 00:21:37.002 "w_mbytes_per_sec": 0 00:21:37.002 }, 00:21:37.002 "claimed": false, 00:21:37.002 "zoned": false, 00:21:37.002 "supported_io_types": { 00:21:37.002 "read": true, 00:21:37.002 "write": true, 00:21:37.002 "unmap": true, 00:21:37.002 "flush": true, 00:21:37.002 "reset": true, 00:21:37.002 "nvme_admin": true, 00:21:37.002 "nvme_io": true, 00:21:37.002 "nvme_io_md": false, 00:21:37.002 "write_zeroes": true, 00:21:37.002 "zcopy": false, 00:21:37.002 "get_zone_info": false, 00:21:37.002 "zone_management": false, 00:21:37.002 "zone_append": false, 00:21:37.002 "compare": true, 00:21:37.002 "compare_and_write": false, 00:21:37.002 "abort": true, 00:21:37.002 
"seek_hole": false, 00:21:37.002 "seek_data": false, 00:21:37.002 "copy": true, 00:21:37.002 "nvme_iov_md": false 00:21:37.002 }, 00:21:37.002 "driver_specific": { 00:21:37.002 "nvme": [ 00:21:37.002 { 00:21:37.002 "pci_address": "0000:00:11.0", 00:21:37.002 "trid": { 00:21:37.002 "trtype": "PCIe", 00:21:37.002 "traddr": "0000:00:11.0" 00:21:37.002 }, 00:21:37.002 "ctrlr_data": { 00:21:37.002 "cntlid": 0, 00:21:37.002 "vendor_id": "0x1b36", 00:21:37.002 "model_number": "QEMU NVMe Ctrl", 00:21:37.002 "serial_number": "12341", 00:21:37.002 "firmware_revision": "8.0.0", 00:21:37.002 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:37.002 "oacs": { 00:21:37.002 "security": 0, 00:21:37.002 "format": 1, 00:21:37.002 "firmware": 0, 00:21:37.002 "ns_manage": 1 00:21:37.002 }, 00:21:37.002 "multi_ctrlr": false, 00:21:37.002 "ana_reporting": false 00:21:37.002 }, 00:21:37.002 "vs": { 00:21:37.002 "nvme_version": "1.4" 00:21:37.002 }, 00:21:37.002 "ns_data": { 00:21:37.002 "id": 1, 00:21:37.002 "can_share": false 00:21:37.002 } 00:21:37.002 } 00:21:37.002 ], 00:21:37.002 "mp_policy": "active_passive" 00:21:37.002 } 00:21:37.002 } 00:21:37.002 ]' 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:37.002 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:37.261 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:37.519 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:37.519 15:48:25 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:37.519 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c45ec1d8-c001-4fcc-98a2-7f911f8d9c19 00:21:37.519 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c45ec1d8-c001-4fcc-98a2-7f911f8d9c19 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ca087ca5-8fac-4553-8f82-5a472b92d2f8 
00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:38.087 { 00:21:38.087 "name": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:38.087 "aliases": [ 00:21:38.087 "lvs/nvme0n1p0" 00:21:38.087 ], 00:21:38.087 "product_name": "Logical Volume", 00:21:38.087 "block_size": 4096, 00:21:38.087 "num_blocks": 26476544, 00:21:38.087 "uuid": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:38.087 "assigned_rate_limits": { 00:21:38.087 "rw_ios_per_sec": 0, 00:21:38.087 "rw_mbytes_per_sec": 0, 00:21:38.087 "r_mbytes_per_sec": 0, 00:21:38.087 "w_mbytes_per_sec": 0 00:21:38.087 }, 00:21:38.087 "claimed": false, 00:21:38.087 "zoned": false, 00:21:38.087 "supported_io_types": { 00:21:38.087 "read": true, 00:21:38.087 "write": true, 00:21:38.087 "unmap": true, 00:21:38.087 "flush": false, 00:21:38.087 "reset": true, 00:21:38.087 "nvme_admin": false, 00:21:38.087 "nvme_io": false, 00:21:38.087 "nvme_io_md": false, 00:21:38.087 "write_zeroes": true, 00:21:38.087 "zcopy": false, 00:21:38.087 "get_zone_info": false, 00:21:38.087 "zone_management": false, 00:21:38.087 "zone_append": false, 00:21:38.087 "compare": false, 00:21:38.087 "compare_and_write": false, 00:21:38.087 "abort": false, 00:21:38.087 "seek_hole": true, 00:21:38.087 "seek_data": true, 00:21:38.087 "copy": false, 00:21:38.087 "nvme_iov_md": false 00:21:38.087 }, 00:21:38.087 "driver_specific": { 00:21:38.087 "lvol": { 00:21:38.087 "lvol_store_uuid": "c45ec1d8-c001-4fcc-98a2-7f911f8d9c19", 00:21:38.087 "base_bdev": "nvme0n1", 00:21:38.087 "thin_provision": true, 00:21:38.087 "num_allocated_clusters": 0, 00:21:38.087 "snapshot": false, 00:21:38.087 "clone": false, 00:21:38.087 "esnap_clone": false 00:21:38.087 } 00:21:38.087 } 00:21:38.087 } 00:21:38.087 ]' 00:21:38.087 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:38.346 15:48:26 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.605 15:48:27 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:38.605 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:38.864 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:38.864 { 00:21:38.864 "name": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:38.864 "aliases": [ 00:21:38.864 "lvs/nvme0n1p0" 00:21:38.864 ], 00:21:38.864 "product_name": "Logical Volume", 00:21:38.864 "block_size": 4096, 00:21:38.864 "num_blocks": 26476544, 00:21:38.864 "uuid": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:38.864 "assigned_rate_limits": { 00:21:38.864 "rw_ios_per_sec": 0, 00:21:38.864 "rw_mbytes_per_sec": 0, 00:21:38.864 "r_mbytes_per_sec": 0, 00:21:38.864 "w_mbytes_per_sec": 0 00:21:38.864 }, 00:21:38.864 "claimed": false, 00:21:38.864 "zoned": false, 00:21:38.864 "supported_io_types": { 00:21:38.864 "read": true, 00:21:38.864 "write": true, 00:21:38.864 "unmap": true, 00:21:38.864 "flush": false, 00:21:38.864 "reset": true, 00:21:38.864 "nvme_admin": false, 00:21:38.864 "nvme_io": false, 00:21:38.864 "nvme_io_md": false, 00:21:38.864 "write_zeroes": true, 00:21:38.864 "zcopy": false, 00:21:38.864 "get_zone_info": false, 00:21:38.864 "zone_management": false, 00:21:38.864 "zone_append": false, 00:21:38.864 "compare": false, 00:21:38.864 "compare_and_write": false, 00:21:38.864 "abort": false, 00:21:38.864 "seek_hole": true, 00:21:38.864 "seek_data": true, 00:21:38.864 "copy": false, 00:21:38.864 "nvme_iov_md": false 00:21:38.864 }, 00:21:38.864 "driver_specific": { 00:21:38.864 "lvol": { 00:21:38.864 "lvol_store_uuid": "c45ec1d8-c001-4fcc-98a2-7f911f8d9c19", 00:21:38.864 "base_bdev": "nvme0n1", 00:21:38.864 "thin_provision": true, 00:21:38.864 "num_allocated_clusters": 0, 00:21:38.865 "snapshot": false, 00:21:38.865 "clone": false, 00:21:38.865 "esnap_clone": false 00:21:38.865 } 00:21:38.865 } 00:21:38.865 } 00:21:38.865 ]' 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:38.865 15:48:27 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:39.123 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:39.123 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca087ca5-8fac-4553-8f82-5a472b92d2f8 00:21:39.382 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:39.382 { 00:21:39.382 "name": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:39.382 "aliases": [ 00:21:39.382 "lvs/nvme0n1p0" 00:21:39.382 ], 00:21:39.382 "product_name": "Logical Volume", 00:21:39.382 "block_size": 4096, 00:21:39.382 "num_blocks": 26476544, 00:21:39.382 "uuid": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:39.382 "assigned_rate_limits": { 00:21:39.382 "rw_ios_per_sec": 0, 00:21:39.382 "rw_mbytes_per_sec": 0, 00:21:39.382 "r_mbytes_per_sec": 0, 00:21:39.382 "w_mbytes_per_sec": 0 00:21:39.382 }, 00:21:39.382 "claimed": false, 00:21:39.382 "zoned": false, 00:21:39.382 "supported_io_types": { 00:21:39.382 "read": true, 00:21:39.382 "write": true, 00:21:39.382 "unmap": true, 00:21:39.382 "flush": false, 00:21:39.382 "reset": true, 00:21:39.382 "nvme_admin": false, 00:21:39.382 "nvme_io": false, 00:21:39.382 "nvme_io_md": false, 00:21:39.382 "write_zeroes": true, 00:21:39.382 "zcopy": false, 00:21:39.382 "get_zone_info": false, 00:21:39.382 "zone_management": false, 00:21:39.382 "zone_append": false, 00:21:39.382 "compare": false, 00:21:39.382 "compare_and_write": false, 00:21:39.382 "abort": false, 00:21:39.382 "seek_hole": true, 00:21:39.382 "seek_data": true, 00:21:39.383 "copy": false, 00:21:39.383 "nvme_iov_md": false 00:21:39.383 }, 00:21:39.383 "driver_specific": { 00:21:39.383 "lvol": { 00:21:39.383 "lvol_store_uuid": "c45ec1d8-c001-4fcc-98a2-7f911f8d9c19", 00:21:39.383 "base_bdev": "nvme0n1", 00:21:39.383 "thin_provision": true, 00:21:39.383 "num_allocated_clusters": 0, 00:21:39.383 "snapshot": false, 00:21:39.383 "clone": false, 00:21:39.383 "esnap_clone": false 00:21:39.383 } 00:21:39.383 } 00:21:39.383 } 00:21:39.383 ]' 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:39.383 15:48:27 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ca087ca5-8fac-4553-8f82-5a472b92d2f8 -c nvc0n1p0 --l2p_dram_limit 60 00:21:39.643 [2024-12-06 15:48:28.238031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.238091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:39.643 [2024-12-06 15:48:28.238115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:39.643 
[2024-12-06 15:48:28.238127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.238215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.238236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.643 [2024-12-06 15:48:28.238248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:39.643 [2024-12-06 15:48:28.238277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.238329] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:39.643 [2024-12-06 15:48:28.238681] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:39.643 [2024-12-06 15:48:28.238712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.238726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.643 [2024-12-06 15:48:28.238738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:21:39.643 [2024-12-06 15:48:28.238751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.238997] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7d159b72-4707-4f85-9cfc-6f05264c5878 00:21:39.643 [2024-12-06 15:48:28.240835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.240869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:39.643 [2024-12-06 15:48:28.240886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:39.643 [2024-12-06 15:48:28.240897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.252569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.252615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.643 [2024-12-06 15:48:28.252645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.572 ms 00:21:39.643 [2024-12-06 15:48:28.252669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.252806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.252824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.643 [2024-12-06 15:48:28.252849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:39.643 [2024-12-06 15:48:28.252871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.253007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.253026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:39.643 [2024-12-06 15:48:28.253040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:39.643 [2024-12-06 15:48:28.253051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.253096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.643 [2024-12-06 15:48:28.255222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 
15:48:28.255257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.643 [2024-12-06 15:48:28.255280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:21:39.643 [2024-12-06 15:48:28.255292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.255358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.255376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:39.643 [2024-12-06 15:48:28.255389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:39.643 [2024-12-06 15:48:28.255404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.255442] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:39.643 [2024-12-06 15:48:28.255633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:39.643 [2024-12-06 15:48:28.255654] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:39.643 [2024-12-06 15:48:28.255672] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:39.643 [2024-12-06 15:48:28.255686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:39.643 [2024-12-06 15:48:28.255704] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:39.643 [2024-12-06 15:48:28.255716] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:39.643 [2024-12-06 15:48:28.255729] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:39.643 [2024-12-06 15:48:28.255739] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:39.643 [2024-12-06 15:48:28.255768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:39.643 [2024-12-06 15:48:28.255780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.255793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:39.643 [2024-12-06 15:48:28.255804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:21:39.643 [2024-12-06 15:48:28.255817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.255914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.643 [2024-12-06 15:48:28.255933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:39.643 [2024-12-06 15:48:28.255962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:39.643 [2024-12-06 15:48:28.255975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.643 [2024-12-06 15:48:28.256089] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:39.643 [2024-12-06 15:48:28.256107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:39.643 [2024-12-06 15:48:28.256119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256163] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:39.643 [2024-12-06 15:48:28.256175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:39.643 [2024-12-06 15:48:28.256207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.643 [2024-12-06 15:48:28.256241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:39.643 [2024-12-06 15:48:28.256254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:39.643 [2024-12-06 15:48:28.256263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.643 [2024-12-06 15:48:28.256278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:39.643 [2024-12-06 15:48:28.256288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:39.643 [2024-12-06 15:48:28.256299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:39.643 [2024-12-06 15:48:28.256321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:39.643 [2024-12-06 15:48:28.256352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:39.643 [2024-12-06 15:48:28.256395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:39.643 [2024-12-06 15:48:28.256426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:39.643 [2024-12-06 15:48:28.256462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.643 [2024-12-06 15:48:28.256483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:39.643 [2024-12-06 15:48:28.256498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:39.643 [2024-12-06 15:48:28.256510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.643 [2024-12-06 15:48:28.256519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:39.643 [2024-12-06 15:48:28.256531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:39.643 [2024-12-06 15:48:28.256540] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.643 [2024-12-06 15:48:28.256552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:39.644 [2024-12-06 15:48:28.256561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:39.644 [2024-12-06 15:48:28.256574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.644 [2024-12-06 15:48:28.256584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:39.644 [2024-12-06 15:48:28.256602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:39.644 [2024-12-06 15:48:28.256612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.644 [2024-12-06 15:48:28.256625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:39.644 [2024-12-06 15:48:28.256635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:39.644 [2024-12-06 15:48:28.256650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.644 [2024-12-06 15:48:28.256662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.644 [2024-12-06 15:48:28.256675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:39.644 [2024-12-06 15:48:28.256685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:39.644 [2024-12-06 15:48:28.256697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:39.644 [2024-12-06 15:48:28.256707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:39.644 [2024-12-06 15:48:28.256719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:39.644 [2024-12-06 15:48:28.256729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:39.644 [2024-12-06 15:48:28.256766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:39.644 [2024-12-06 15:48:28.256787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.256802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:39.644 [2024-12-06 15:48:28.256812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:39.644 [2024-12-06 15:48:28.256825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:39.644 [2024-12-06 15:48:28.256835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:39.644 [2024-12-06 15:48:28.256847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:39.644 [2024-12-06 15:48:28.256872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:39.644 [2024-12-06 15:48:28.256888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:39.644 [2024-12-06 15:48:28.256899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:39.644 [2024-12-06 15:48:28.256912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:39.644 [2024-12-06 15:48:28.256922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.256935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.256945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.256986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.256999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:39.644 [2024-12-06 15:48:28.257012] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:39.644 [2024-12-06 15:48:28.257035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.257050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:39.644 [2024-12-06 15:48:28.257061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:39.644 [2024-12-06 15:48:28.257080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:39.644 [2024-12-06 15:48:28.257092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:39.644 [2024-12-06 15:48:28.257126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.644 [2024-12-06 15:48:28.257137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:39.644 [2024-12-06 15:48:28.257156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:21:39.644 [2024-12-06 15:48:28.257166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.644 [2024-12-06 15:48:28.257317] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
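(For orientation, a minimal sketch of the RPC sequence that drives the startup trace above, assuming the same repo layout as this run; the lvol UUID, cache bdev name, and L2P DRAM limit are the values shown in the captured log, not fresh inputs:)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
LVOL=ca087ca5-8fac-4553-8f82-5a472b92d2f8

# Derive the lvol size in MiB from block size and block count, as the jq
# extraction earlier in the log does: 26476544 blocks * 4096 B = 103424 MiB.
bs=$("$RPC" bdev_get_bdevs -b "$LVOL" | jq '.[] .block_size')   # 4096
nb=$("$RPC" bdev_get_bdevs -b "$LVOL" | jq '.[] .num_blocks')   # 26476544
echo $(( bs * nb / 1024 / 1024 ))                               # 103424

# Create the FTL bdev on top of the lvol, with nvc0n1p0 as the NV cache
# write buffer and the L2P table capped at 60 MiB of DRAM (mirrors the
# bdev_ftl_create call whose trace output surrounds this sketch):
"$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 60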
00:21:39.644 [2024-12-06 15:48:28.257345] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:42.930 [2024-12-06 15:48:31.390336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.390400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:42.931 [2024-12-06 15:48:31.390422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3133.039 ms 00:21:42.931 [2024-12-06 15:48:31.390435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.406422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.406510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.931 [2024-12-06 15:48:31.406549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.857 ms 00:21:42.931 [2024-12-06 15:48:31.406562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.406683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.406699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:42.931 [2024-12-06 15:48:31.406714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:42.931 [2024-12-06 15:48:31.406725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.430151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.430228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.931 [2024-12-06 15:48:31.430254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.300 ms 00:21:42.931 [2024-12-06 15:48:31.430265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.430329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.430345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.931 [2024-12-06 15:48:31.430361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:42.931 [2024-12-06 15:48:31.430372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.431238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.431280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.931 [2024-12-06 15:48:31.431297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:21:42.931 [2024-12-06 15:48:31.431312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.431532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.431577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.931 [2024-12-06 15:48:31.431594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:42.931 [2024-12-06 15:48:31.431606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.443218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.443273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.931 [2024-12-06 
15:48:31.443292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.564 ms 00:21:42.931 [2024-12-06 15:48:31.443304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.452702] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:42.931 [2024-12-06 15:48:31.474667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.474751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:42.931 [2024-12-06 15:48:31.474771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.225 ms 00:21:42.931 [2024-12-06 15:48:31.474806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.531595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.531659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:42.931 [2024-12-06 15:48:31.531676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.722 ms 00:21:42.931 [2024-12-06 15:48:31.531694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.532004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.532036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:42.931 [2024-12-06 15:48:31.532051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:42.931 [2024-12-06 15:48:31.532065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.536018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.536082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:42.931 [2024-12-06 15:48:31.536097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.897 ms 00:21:42.931 [2024-12-06 15:48:31.536122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.539152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.539227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:42.931 [2024-12-06 15:48:31.539244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:21:42.931 [2024-12-06 15:48:31.539261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.539717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.539747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:42.931 [2024-12-06 15:48:31.539761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:21:42.931 [2024-12-06 15:48:31.539796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.580185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.580253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:42.931 [2024-12-06 15:48:31.580270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.336 ms 00:21:42.931 [2024-12-06 15:48:31.580284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 
15:48:31.585220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.585282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:42.931 [2024-12-06 15:48:31.585298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:21:42.931 [2024-12-06 15:48:31.585311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.588836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.588894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:42.931 [2024-12-06 15:48:31.588908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:21:42.931 [2024-12-06 15:48:31.588921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.592881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.592953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.931 [2024-12-06 15:48:31.592970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.900 ms 00:21:42.931 [2024-12-06 15:48:31.592987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.593045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.593066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.931 [2024-12-06 15:48:31.593079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:42.931 [2024-12-06 15:48:31.593092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.593250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.931 [2024-12-06 15:48:31.593276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.931 [2024-12-06 15:48:31.593293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:42.931 [2024-12-06 15:48:31.593307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.931 [2024-12-06 15:48:31.595029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3356.445 ms, result 0 00:21:42.931 { 00:21:42.931 "name": "ftl0", 00:21:42.931 "uuid": "7d159b72-4707-4f85-9cfc-6f05264c5878" 00:21:42.931 } 00:21:42.931 15:48:31 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:42.931 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:42.932 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:43.189 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:43.189 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:43.189 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:43.189 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:43.189 15:48:31 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:43.447 [ 00:21:43.447 { 00:21:43.447 "name": "ftl0", 00:21:43.447 "aliases": [ 00:21:43.447 "7d159b72-4707-4f85-9cfc-6f05264c5878" 00:21:43.447 ], 00:21:43.447 "product_name": "FTL disk", 
00:21:43.447 "block_size": 4096, 00:21:43.447 "num_blocks": 20971520, 00:21:43.447 "uuid": "7d159b72-4707-4f85-9cfc-6f05264c5878", 00:21:43.447 "assigned_rate_limits": { 00:21:43.447 "rw_ios_per_sec": 0, 00:21:43.447 "rw_mbytes_per_sec": 0, 00:21:43.447 "r_mbytes_per_sec": 0, 00:21:43.447 "w_mbytes_per_sec": 0 00:21:43.447 }, 00:21:43.447 "claimed": false, 00:21:43.447 "zoned": false, 00:21:43.447 "supported_io_types": { 00:21:43.447 "read": true, 00:21:43.447 "write": true, 00:21:43.447 "unmap": true, 00:21:43.447 "flush": true, 00:21:43.447 "reset": false, 00:21:43.447 "nvme_admin": false, 00:21:43.447 "nvme_io": false, 00:21:43.447 "nvme_io_md": false, 00:21:43.447 "write_zeroes": true, 00:21:43.447 "zcopy": false, 00:21:43.447 "get_zone_info": false, 00:21:43.447 "zone_management": false, 00:21:43.447 "zone_append": false, 00:21:43.447 "compare": false, 00:21:43.447 "compare_and_write": false, 00:21:43.447 "abort": false, 00:21:43.447 "seek_hole": false, 00:21:43.447 "seek_data": false, 00:21:43.447 "copy": false, 00:21:43.447 "nvme_iov_md": false 00:21:43.447 }, 00:21:43.447 "driver_specific": { 00:21:43.447 "ftl": { 00:21:43.447 "base_bdev": "ca087ca5-8fac-4553-8f82-5a472b92d2f8", 00:21:43.447 "cache": "nvc0n1p0" 00:21:43.447 } 00:21:43.447 } 00:21:43.447 } 00:21:43.447 ] 00:21:43.704 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:43.705 15:48:32 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:43.705 15:48:32 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:43.705 15:48:32 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:43.705 15:48:32 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:43.964 [2024-12-06 15:48:32.551143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.551201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:43.964 [2024-12-06 15:48:32.551221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:43.964 [2024-12-06 15:48:32.551232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.551276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:43.964 [2024-12-06 15:48:32.552245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.552320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:43.964 [2024-12-06 15:48:32.552337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:21:43.964 [2024-12-06 15:48:32.552365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.553050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.553099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:43.964 [2024-12-06 15:48:32.553126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:21:43.964 [2024-12-06 15:48:32.553141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.555766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.555813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:43.964 [2024-12-06 
15:48:32.555840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:21:43.964 [2024-12-06 15:48:32.555854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.561291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.561341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:43.964 [2024-12-06 15:48:32.561368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.400 ms 00:21:43.964 [2024-12-06 15:48:32.561382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.565169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.565244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:43.964 [2024-12-06 15:48:32.565258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:21:43.964 [2024-12-06 15:48:32.565272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.570651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.570712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:43.964 [2024-12-06 15:48:32.570730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.334 ms 00:21:43.964 [2024-12-06 15:48:32.570760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.571031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.571056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:43.964 [2024-12-06 15:48:32.571070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:21:43.964 [2024-12-06 15:48:32.571083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.573239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.573295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:43.964 [2024-12-06 15:48:32.573320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:21:43.964 [2024-12-06 15:48:32.573332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.574966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.575038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:43.964 [2024-12-06 15:48:32.575051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:21:43.964 [2024-12-06 15:48:32.575066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.576300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.576364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:43.964 [2024-12-06 15:48:32.576424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:21:43.964 [2024-12-06 15:48:32.576438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.577738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.964 [2024-12-06 15:48:32.577809] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:43.964 [2024-12-06 15:48:32.577822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:21:43.964 [2024-12-06 15:48:32.577835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.964 [2024-12-06 15:48:32.577880] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:43.964 [2024-12-06 15:48:32.577904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.577918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.577932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.577975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.577994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 
15:48:32.578281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:43.964 [2024-12-06 15:48:32.578369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:21:43.965 [2024-12-06 15:48:32.578617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.578990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:43.965 [2024-12-06 15:48:32.579338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:43.965 [2024-12-06 15:48:32.579349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d159b72-4707-4f85-9cfc-6f05264c5878 00:21:43.965 [2024-12-06 15:48:32.579366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:43.965 [2024-12-06 15:48:32.579376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:43.965 [2024-12-06 15:48:32.579389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:43.965 [2024-12-06 15:48:32.579400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:43.965 [2024-12-06 15:48:32.579412] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:43.965 [2024-12-06 15:48:32.579423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:43.965 [2024-12-06 15:48:32.579436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:43.965 [2024-12-06 15:48:32.579446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:43.965 [2024-12-06 15:48:32.579458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:43.965 [2024-12-06 15:48:32.579469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.965 [2024-12-06 15:48:32.579482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:43.965 [2024-12-06 15:48:32.579494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:21:43.965 [2024-12-06 15:48:32.579507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.965 [2024-12-06 15:48:32.582348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.965 [2024-12-06 15:48:32.582406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:43.965 [2024-12-06 15:48:32.582419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:21:43.965 [2024-12-06 15:48:32.582432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.965 [2024-12-06 15:48:32.582559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.965 [2024-12-06 15:48:32.582581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:43.965 [2024-12-06 15:48:32.582593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:43.965 [2024-12-06 15:48:32.582626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.965 [2024-12-06 15:48:32.592825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.965 [2024-12-06 15:48:32.592886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.966 [2024-12-06 15:48:32.592914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.592928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 
[2024-12-06 15:48:32.593005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.593025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.966 [2024-12-06 15:48:32.593038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.593055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.593247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.593273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.966 [2024-12-06 15:48:32.593286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.593300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.593345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.593362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.966 [2024-12-06 15:48:32.593373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.593386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.608009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.608091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.966 [2024-12-06 15:48:32.608108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.608123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.619411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.619485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.966 [2024-12-06 15:48:32.619518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.619534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.619666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.619693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.966 [2024-12-06 15:48:32.619706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.619719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.619839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.619881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.966 [2024-12-06 15:48:32.619895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.619909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.620059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.620088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.966 [2024-12-06 15:48:32.620101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.620115] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.620202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.620225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:43.966 [2024-12-06 15:48:32.620238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.620252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.620316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.620357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.966 [2024-12-06 15:48:32.620378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.620399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.620480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.966 [2024-12-06 15:48:32.620500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.966 [2024-12-06 15:48:32.620523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.966 [2024-12-06 15:48:32.620536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.966 [2024-12-06 15:48:32.620785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.574 ms, result 0 00:21:43.966 true 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87672 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 87672 ']' 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 87672 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.966 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87672 00:21:44.225 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.225 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.225 killing process with pid 87672 00:21:44.225 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87672' 00:21:44.225 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 87672 00:21:44.225 15:48:32 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 87672 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:47.519 15:48:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:47.519 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:47.519 fio-3.35 00:21:47.519 Starting 1 thread 00:21:52.788 00:21:52.788 test: (groupid=0, jobs=1): err= 0: pid=87856: Fri Dec 6 15:48:41 2024 00:21:52.788 read: IOPS=842, BW=56.0MiB/s (58.7MB/s)(255MiB/4549msec) 00:21:52.788 slat (nsec): min=5745, max=57506, avg=10899.01, stdev=4524.29 00:21:52.788 clat (usec): min=368, max=1423, avg=525.64, stdev=59.75 00:21:52.788 lat (usec): min=380, max=1432, avg=536.54, stdev=60.65 00:21:52.788 clat percentiles (usec): 00:21:52.788 | 1.00th=[ 424], 5.00th=[ 457], 10.00th=[ 469], 20.00th=[ 482], 00:21:52.788 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 529], 00:21:52.788 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 627], 00:21:52.788 | 99.00th=[ 709], 99.50th=[ 758], 99.90th=[ 922], 99.95th=[ 1418], 00:21:52.788 | 99.99th=[ 1418] 00:21:52.788 write: IOPS=848, BW=56.4MiB/s (59.1MB/s)(256MiB/4544msec); 0 zone resets 00:21:52.788 slat (usec): min=18, max=136, avg=30.59, stdev= 8.14 00:21:52.788 clat (usec): min=410, max=3422, avg=597.63, stdev=104.89 00:21:52.788 lat (usec): min=439, max=3457, avg=628.22, stdev=105.05 00:21:52.788 clat percentiles (usec): 00:21:52.788 | 1.00th=[ 482], 5.00th=[ 506], 10.00th=[ 523], 20.00th=[ 553], 00:21:52.788 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 594], 00:21:52.788 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 693], 00:21:52.788 | 99.00th=[ 873], 99.50th=[ 930], 99.90th=[ 2573], 99.95th=[ 2966], 00:21:52.788 | 99.99th=[ 3425] 00:21:52.788 bw ( KiB/s): min=55216, max=59160, per=99.93%, avg=57664.00, stdev=1556.59, samples=9 00:21:52.788 iops : min= 812, max= 870, avg=848.00, stdev=22.89, samples=9 00:21:52.788 lat (usec) : 500=20.04%, 750=78.19%, 1000=1.61% 00:21:52.788 lat 
(msec) : 2=0.09%, 4=0.07% 00:21:52.788 cpu : usr=98.68%, sys=0.11%, ctx=8, majf=0, minf=1326 00:21:52.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.788 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:52.788 00:21:52.788 Run status group 0 (all jobs): 00:21:52.788 READ: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=255MiB (267MB), run=4549-4549msec 00:21:52.788 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=256MiB (269MB), run=4544-4544msec 00:21:53.726 ----------------------------------------------------- 00:21:53.726 Suppressions used: 00:21:53.726 count bytes template 00:21:53.726 1 5 /usr/src/fio/parse.c 00:21:53.726 1 8 libtcmalloc_minimal.so 00:21:53.726 1 904 libcrypto.so 00:21:53.726 ----------------------------------------------------- 00:21:53.726 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:53.726 15:48:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:53.985 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:53.985 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:53.985 fio-3.35 00:21:53.985 Starting 2 threads 00:22:32.703 00:22:32.703 first_half: (groupid=0, jobs=1): err= 0: pid=87949: Fri Dec 6 15:49:16 2024 00:22:32.703 read: IOPS=1976, BW=7906KiB/s (8095kB/s)(255MiB/33084msec) 00:22:32.703 slat (nsec): min=3778, max=71204, avg=9010.23, stdev=5114.88 00:22:32.703 clat (usec): min=1090, max=328740, avg=51869.31, stdev=21283.85 00:22:32.703 lat (usec): min=1099, max=328747, avg=51878.32, stdev=21284.23 00:22:32.703 clat percentiles (msec): 00:22:32.703 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 47], 00:22:32.703 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 48], 00:22:32.703 | 70.00th=[ 49], 80.00th=[ 51], 90.00th=[ 57], 95.00th=[ 69], 00:22:32.703 | 99.00th=[ 176], 99.50th=[ 201], 99.90th=[ 259], 99.95th=[ 284], 00:22:32.703 | 99.99th=[ 321] 00:22:32.703 write: IOPS=2110, BW=8442KiB/s (8645kB/s)(256MiB/31051msec); 0 zone resets 00:22:32.703 slat (usec): min=4, max=428, avg=10.71, stdev= 8.05 00:22:32.703 clat (usec): min=559, max=120456, avg=12761.69, stdev=22060.72 00:22:32.703 lat (usec): min=609, max=120477, avg=12772.40, stdev=22061.49 00:22:32.703 clat percentiles (usec): 00:22:32.703 | 1.00th=[ 1037], 5.00th=[ 1418], 10.00th=[ 1696], 20.00th=[ 2409], 00:22:32.703 | 30.00th=[ 4015], 40.00th=[ 5669], 50.00th=[ 6521], 60.00th=[ 7308], 00:22:32.703 | 70.00th=[ 8094], 80.00th=[ 12518], 90.00th=[ 17695], 95.00th=[ 88605], 00:22:32.703 | 99.00th=[100140], 99.50th=[108528], 99.90th=[113771], 99.95th=[115868], 00:22:32.703 | 99.99th=[119014] 00:22:32.703 bw ( KiB/s): min= 416, max=41568, per=100.00%, avg=18724.57, stdev=13022.74, samples=28 00:22:32.703 iops : min= 104, max=10392, avg=4681.14, stdev=3255.69, samples=28 00:22:32.703 lat (usec) : 750=0.02%, 1000=0.36% 00:22:32.703 lat (msec) : 2=7.52%, 4=7.22%, 10=23.69%, 20=7.25%, 50=39.94% 00:22:32.703 lat (msec) : 100=12.11%, 250=1.83%, 500=0.06% 00:22:32.703 cpu : usr=98.64%, sys=0.56%, ctx=88, majf=0, minf=5557 00:22:32.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.703 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.703 issued rwts: total=65387,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.703 second_half: (groupid=0, jobs=1): err= 0: pid=87950: Fri Dec 6 15:49:16 2024 00:22:32.703 read: IOPS=1966, BW=7866KiB/s (8055kB/s)(255MiB/33149msec) 00:22:32.703 slat (nsec): min=3907, max=71583, avg=10749.20, stdev=5200.78 00:22:32.703 clat (usec): min=996, max=333367, avg=51528.06, stdev=20727.79 00:22:32.703 lat (usec): min=1007, max=333383, avg=51538.81, stdev=20728.18 00:22:32.703 clat percentiles (msec): 00:22:32.703 | 1.00th=[ 16], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 47], 00:22:32.703 | 30.00th=[ 47], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 48], 00:22:32.703 | 70.00th=[ 49], 80.00th=[ 51], 90.00th=[ 57], 95.00th=[ 74], 00:22:32.703 
| 99.00th=[ 169], 99.50th=[ 194], 99.90th=[ 218], 99.95th=[ 224], 00:22:32.703 | 99.99th=[ 326] 00:22:32.703 write: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(256MiB/25396msec); 0 zone resets 00:22:32.703 slat (usec): min=4, max=440, avg=11.54, stdev= 7.28 00:22:32.703 clat (usec): min=556, max=120867, avg=13453.08, stdev=22918.83 00:22:32.703 lat (usec): min=571, max=120885, avg=13464.62, stdev=22919.09 00:22:32.703 clat percentiles (usec): 00:22:32.703 | 1.00th=[ 1106], 5.00th=[ 1401], 10.00th=[ 1582], 20.00th=[ 1860], 00:22:32.703 | 30.00th=[ 2180], 40.00th=[ 3851], 50.00th=[ 6325], 60.00th=[ 7701], 00:22:32.703 | 70.00th=[ 9241], 80.00th=[ 13960], 90.00th=[ 40109], 95.00th=[ 87557], 00:22:32.703 | 99.00th=[101188], 99.50th=[108528], 99.90th=[113771], 99.95th=[115868], 00:22:32.703 | 99.99th=[119014] 00:22:32.703 bw ( KiB/s): min= 832, max=45904, per=100.00%, avg=20973.00, stdev=11748.35, samples=25 00:22:32.703 iops : min= 208, max=11476, avg=5243.24, stdev=2937.08, samples=25 00:22:32.703 lat (usec) : 750=0.02%, 1000=0.23% 00:22:32.703 lat (msec) : 2=12.13%, 4=8.22%, 10=15.69%, 20=8.97%, 50=40.91% 00:22:32.703 lat (msec) : 100=11.61%, 250=2.22%, 500=0.01% 00:22:32.703 cpu : usr=98.98%, sys=0.24%, ctx=62, majf=0, minf=5587 00:22:32.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.703 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.703 issued rwts: total=65189,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.703 00:22:32.703 Run status group 0 (all jobs): 00:22:32.703 READ: bw=15.4MiB/s (16.1MB/s), 7866KiB/s-7906KiB/s (8055kB/s-8095kB/s), io=510MiB (535MB), run=33084-33149msec 00:22:32.704 WRITE: bw=16.5MiB/s (17.3MB/s), 8442KiB/s-10.1MiB/s (8645kB/s-10.6MB/s), io=512MiB (537MB), run=25396-31051msec 00:22:32.704 ----------------------------------------------------- 00:22:32.704 Suppressions used: 00:22:32.704 count bytes template 00:22:32.704 2 10 /usr/src/fio/parse.c 00:22:32.704 4 384 /usr/src/fio/iolog.c 00:22:32.704 1 8 libtcmalloc_minimal.so 00:22:32.704 1 904 libcrypto.so 00:22:32.704 ----------------------------------------------------- 00:22:32.704 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.704 15:49:17 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:32.704 15:49:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:32.704 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:32.704 fio-3.35 00:22:32.704 Starting 1 thread 00:22:44.999 00:22:44.999 test: (groupid=0, jobs=1): err= 0: pid=88341: Fri Dec 6 15:49:32 2024 00:22:44.999 read: IOPS=7187, BW=28.1MiB/s (29.4MB/s)(255MiB/9072msec) 00:22:44.999 slat (nsec): min=3587, max=61756, avg=5446.58, stdev=1644.38 00:22:44.999 clat (usec): min=605, max=34776, avg=17799.03, stdev=836.59 00:22:44.999 lat (usec): min=613, max=34783, avg=17804.48, stdev=836.61 00:22:44.999 clat percentiles (usec): 00:22:44.999 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17171], 20.00th=[17433], 00:22:44.999 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17695], 60.00th=[17695], 00:22:44.999 | 70.00th=[17957], 80.00th=[17957], 90.00th=[18482], 95.00th=[18744], 00:22:44.999 | 99.00th=[20841], 99.50th=[20841], 99.90th=[26346], 99.95th=[30802], 00:22:44.999 | 99.99th=[34341] 00:22:44.999 write: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(256MiB/4967msec); 0 zone resets 00:22:44.999 slat (usec): min=4, max=325, avg= 7.62, stdev= 4.15 00:22:44.999 clat (usec): min=592, max=55932, avg=9648.94, stdev=11567.67 00:22:44.999 lat (usec): min=604, max=55939, avg=9656.56, stdev=11567.69 00:22:44.999 clat percentiles (usec): 00:22:44.999 | 1.00th=[ 865], 5.00th=[ 1045], 10.00th=[ 1123], 20.00th=[ 1221], 00:22:44.999 | 30.00th=[ 1336], 40.00th=[ 1647], 50.00th=[ 6652], 60.00th=[ 7701], 00:22:44.999 | 70.00th=[ 9110], 80.00th=[11469], 90.00th=[34341], 95.00th=[35914], 00:22:44.999 | 99.00th=[37487], 99.50th=[38536], 99.90th=[40633], 99.95th=[45876], 00:22:44.999 | 99.99th=[52691] 00:22:44.999 bw ( KiB/s): min=43992, max=68248, per=99.34%, avg=52428.80, stdev=8474.16, samples=10 00:22:44.999 iops : min=10998, max=17062, avg=13107.20, stdev=2118.54, samples=10 00:22:44.999 lat (usec) : 750=0.10%, 1000=1.55% 00:22:44.999 lat (msec) : 2=18.89%, 4=0.54%, 10=15.94%, 20=54.03%, 50=8.93% 00:22:44.999 lat (msec) : 100=0.01% 00:22:44.999 cpu : usr=98.72%, sys=0.63%, ctx=30, majf=0, minf=5577 
00:22:44.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:44.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.999 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.999 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.999 00:22:44.999 Run status group 0 (all jobs): 00:22:44.999 READ: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=255MiB (267MB), run=9072-9072msec 00:22:44.999 WRITE: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=256MiB (268MB), run=4967-4967msec 00:22:45.258 ----------------------------------------------------- 00:22:45.258 Suppressions used: 00:22:45.258 count bytes template 00:22:45.258 1 5 /usr/src/fio/parse.c 00:22:45.258 2 192 /usr/src/fio/iolog.c 00:22:45.258 1 8 libtcmalloc_minimal.so 00:22:45.258 1 904 libcrypto.so 00:22:45.258 ----------------------------------------------------- 00:22:45.258 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:45.258 Remove shared memory files 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70213 /dev/shm/spdk_tgt_trace.pid86634 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:45.258 00:22:45.258 real 1m10.008s 00:22:45.258 user 2m41.789s 00:22:45.258 sys 0m4.234s 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.258 ************************************ 00:22:45.258 END TEST ftl_fio_basic 00:22:45.258 15:49:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 ************************************ 00:22:45.258 15:49:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:45.258 15:49:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:45.258 15:49:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.258 15:49:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 ************************************ 00:22:45.258 START TEST ftl_bdevperf 00:22:45.258 ************************************ 00:22:45.258 15:49:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:45.518 * Looking for test storage... 
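Note: the fio_plugin trace repeated before each job above shows how these fio runs reach the FTL bdev without a kernel block device: the helper ldd's the SPDK bdev engine, pulls out the libasan path it links against, and preloads both so the sanitizer initializes ahead of the plugin. A minimal sketch of that mechanic using the paths from this run (variable names paraphrased from the traced helper, not its exact source):

    # Preload the matching ASAN runtime ahead of the SPDK fio engine so the
    # job file can use ioengine=spdk_bdev against the in-process FTL bdev.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /usr/lib64/libasan.so.8 in this run
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio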
00:22:45.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.518 15:49:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:45.518 15:49:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:45.518 15:49:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:45.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.518 --rc genhtml_branch_coverage=1 00:22:45.518 --rc genhtml_function_coverage=1 00:22:45.518 --rc genhtml_legend=1 00:22:45.518 --rc geninfo_all_blocks=1 00:22:45.518 --rc geninfo_unexecuted_blocks=1 00:22:45.518 00:22:45.518 ' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:45.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.518 --rc genhtml_branch_coverage=1 00:22:45.518 
--rc genhtml_function_coverage=1 00:22:45.518 --rc genhtml_legend=1 00:22:45.518 --rc geninfo_all_blocks=1 00:22:45.518 --rc geninfo_unexecuted_blocks=1 00:22:45.518 00:22:45.518 ' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:45.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.518 --rc genhtml_branch_coverage=1 00:22:45.518 --rc genhtml_function_coverage=1 00:22:45.518 --rc genhtml_legend=1 00:22:45.518 --rc geninfo_all_blocks=1 00:22:45.518 --rc geninfo_unexecuted_blocks=1 00:22:45.518 00:22:45.518 ' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:45.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.518 --rc genhtml_branch_coverage=1 00:22:45.518 --rc genhtml_function_coverage=1 00:22:45.518 --rc genhtml_legend=1 00:22:45.518 --rc geninfo_all_blocks=1 00:22:45.518 --rc geninfo_unexecuted_blocks=1 00:22:45.518 00:22:45.518 ' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:45.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88574 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88574 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88574 ']' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.518 15:49:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 [2024-12-06 15:49:34.215393] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
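Note: the bdevperf launch above starts the app suspended (-z, so it sits idle until driven over RPC) with -T ftl0 naming, per this trace, the FTL bdev the run will exercise; waitforlisten then blocks until pid 88574 answers on /var/tmp/spdk.sock. A sketch of the same sequence, assuming the autotest helpers are already sourced:

    # Start bdevperf paused; it stays idle until configured and started over RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid"   # helper from autotest_common.sh; polls /var/tmp/spdk.sock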
00:22:45.776 [2024-12-06 15:49:34.215871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88574 ] 00:22:45.776 [2024-12-06 15:49:34.379289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.776 [2024-12-06 15:49:34.419657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:46.709 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:46.967 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:47.224 { 00:22:47.224 "name": "nvme0n1", 00:22:47.224 "aliases": [ 00:22:47.224 "8cb2a079-d15d-437b-99a9-a6fc93a71565" 00:22:47.224 ], 00:22:47.224 "product_name": "NVMe disk", 00:22:47.224 "block_size": 4096, 00:22:47.224 "num_blocks": 1310720, 00:22:47.224 "uuid": "8cb2a079-d15d-437b-99a9-a6fc93a71565", 00:22:47.224 "numa_id": -1, 00:22:47.224 "assigned_rate_limits": { 00:22:47.224 "rw_ios_per_sec": 0, 00:22:47.224 "rw_mbytes_per_sec": 0, 00:22:47.224 "r_mbytes_per_sec": 0, 00:22:47.224 "w_mbytes_per_sec": 0 00:22:47.224 }, 00:22:47.224 "claimed": true, 00:22:47.224 "claim_type": "read_many_write_one", 00:22:47.224 "zoned": false, 00:22:47.224 "supported_io_types": { 00:22:47.224 "read": true, 00:22:47.224 "write": true, 00:22:47.224 "unmap": true, 00:22:47.224 "flush": true, 00:22:47.224 "reset": true, 00:22:47.224 "nvme_admin": true, 00:22:47.224 "nvme_io": true, 00:22:47.224 "nvme_io_md": false, 00:22:47.224 "write_zeroes": true, 00:22:47.224 "zcopy": false, 00:22:47.224 "get_zone_info": false, 00:22:47.224 "zone_management": false, 00:22:47.224 "zone_append": false, 00:22:47.224 "compare": true, 00:22:47.224 "compare_and_write": false, 00:22:47.224 "abort": true, 00:22:47.224 "seek_hole": false, 00:22:47.224 "seek_data": false, 00:22:47.224 "copy": true, 00:22:47.224 "nvme_iov_md": false 00:22:47.224 }, 00:22:47.224 "driver_specific": { 00:22:47.224 
"nvme": [ 00:22:47.224 { 00:22:47.224 "pci_address": "0000:00:11.0", 00:22:47.224 "trid": { 00:22:47.224 "trtype": "PCIe", 00:22:47.224 "traddr": "0000:00:11.0" 00:22:47.224 }, 00:22:47.224 "ctrlr_data": { 00:22:47.224 "cntlid": 0, 00:22:47.224 "vendor_id": "0x1b36", 00:22:47.224 "model_number": "QEMU NVMe Ctrl", 00:22:47.224 "serial_number": "12341", 00:22:47.224 "firmware_revision": "8.0.0", 00:22:47.224 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.224 "oacs": { 00:22:47.224 "security": 0, 00:22:47.224 "format": 1, 00:22:47.224 "firmware": 0, 00:22:47.224 "ns_manage": 1 00:22:47.224 }, 00:22:47.224 "multi_ctrlr": false, 00:22:47.224 "ana_reporting": false 00:22:47.224 }, 00:22:47.224 "vs": { 00:22:47.224 "nvme_version": "1.4" 00:22:47.224 }, 00:22:47.224 "ns_data": { 00:22:47.224 "id": 1, 00:22:47.224 "can_share": false 00:22:47.224 } 00:22:47.224 } 00:22:47.224 ], 00:22:47.224 "mp_policy": "active_passive" 00:22:47.224 } 00:22:47.224 } 00:22:47.224 ]' 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:47.224 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.225 15:49:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.789 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c45ec1d8-c001-4fcc-98a2-7f911f8d9c19 00:22:47.789 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.789 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c45ec1d8-c001-4fcc-98a2-7f911f8d9c19 00:22:48.047 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:48.305 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ab6b9624-0a6a-4c40-973c-ae207c035d1c 00:22:48.305 15:49:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ab6b9624-0a6a-4c40-973c-ae207c035d1c 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.563 15:49:37 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:48.563 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:48.821 { 00:22:48.821 "name": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:48.821 "aliases": [ 00:22:48.821 "lvs/nvme0n1p0" 00:22:48.821 ], 00:22:48.821 "product_name": "Logical Volume", 00:22:48.821 "block_size": 4096, 00:22:48.821 "num_blocks": 26476544, 00:22:48.821 "uuid": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:48.821 "assigned_rate_limits": { 00:22:48.821 "rw_ios_per_sec": 0, 00:22:48.821 "rw_mbytes_per_sec": 0, 00:22:48.821 "r_mbytes_per_sec": 0, 00:22:48.821 "w_mbytes_per_sec": 0 00:22:48.821 }, 00:22:48.821 "claimed": false, 00:22:48.821 "zoned": false, 00:22:48.821 "supported_io_types": { 00:22:48.821 "read": true, 00:22:48.821 "write": true, 00:22:48.821 "unmap": true, 00:22:48.821 "flush": false, 00:22:48.821 "reset": true, 00:22:48.821 "nvme_admin": false, 00:22:48.821 "nvme_io": false, 00:22:48.821 "nvme_io_md": false, 00:22:48.821 "write_zeroes": true, 00:22:48.821 "zcopy": false, 00:22:48.821 "get_zone_info": false, 00:22:48.821 "zone_management": false, 00:22:48.821 "zone_append": false, 00:22:48.821 "compare": false, 00:22:48.821 "compare_and_write": false, 00:22:48.821 "abort": false, 00:22:48.821 "seek_hole": true, 00:22:48.821 "seek_data": true, 00:22:48.821 "copy": false, 00:22:48.821 "nvme_iov_md": false 00:22:48.821 }, 00:22:48.821 "driver_specific": { 00:22:48.821 "lvol": { 00:22:48.821 "lvol_store_uuid": "ab6b9624-0a6a-4c40-973c-ae207c035d1c", 00:22:48.821 "base_bdev": "nvme0n1", 00:22:48.821 "thin_provision": true, 00:22:48.821 "num_allocated_clusters": 0, 00:22:48.821 "snapshot": false, 00:22:48.821 "clone": false, 00:22:48.821 "esnap_clone": false 00:22:48.821 } 00:22:48.821 } 00:22:48.821 } 00:22:48.821 ]' 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:48.821 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:49.408 15:49:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.408 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:49.408 { 00:22:49.408 "name": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:49.408 "aliases": [ 00:22:49.408 "lvs/nvme0n1p0" 00:22:49.408 ], 00:22:49.408 "product_name": "Logical Volume", 00:22:49.408 "block_size": 4096, 00:22:49.408 "num_blocks": 26476544, 00:22:49.408 "uuid": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:49.408 "assigned_rate_limits": { 00:22:49.408 "rw_ios_per_sec": 0, 00:22:49.408 "rw_mbytes_per_sec": 0, 00:22:49.408 "r_mbytes_per_sec": 0, 00:22:49.408 "w_mbytes_per_sec": 0 00:22:49.408 }, 00:22:49.408 "claimed": false, 00:22:49.408 "zoned": false, 00:22:49.408 "supported_io_types": { 00:22:49.408 "read": true, 00:22:49.408 "write": true, 00:22:49.408 "unmap": true, 00:22:49.408 "flush": false, 00:22:49.408 "reset": true, 00:22:49.408 "nvme_admin": false, 00:22:49.408 "nvme_io": false, 00:22:49.408 "nvme_io_md": false, 00:22:49.408 "write_zeroes": true, 00:22:49.408 "zcopy": false, 00:22:49.408 "get_zone_info": false, 00:22:49.408 "zone_management": false, 00:22:49.408 "zone_append": false, 00:22:49.408 "compare": false, 00:22:49.408 "compare_and_write": false, 00:22:49.408 "abort": false, 00:22:49.408 "seek_hole": true, 00:22:49.408 "seek_data": true, 00:22:49.408 "copy": false, 00:22:49.408 "nvme_iov_md": false 00:22:49.408 }, 00:22:49.408 "driver_specific": { 00:22:49.408 "lvol": { 00:22:49.408 "lvol_store_uuid": "ab6b9624-0a6a-4c40-973c-ae207c035d1c", 00:22:49.408 "base_bdev": "nvme0n1", 00:22:49.408 "thin_provision": true, 00:22:49.408 "num_allocated_clusters": 0, 00:22:49.408 "snapshot": false, 00:22:49.408 "clone": false, 00:22:49.408 "esnap_clone": false 00:22:49.408 } 00:22:49.408 } 00:22:49.408 } 00:22:49.408 ]' 00:22:49.408 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:49.408 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:49.408 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:49.667 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:49.667 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:49.667 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:49.667 15:49:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:49.667 15:49:38 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afc88700-f0cf-4330-ba65-71f9ec8e4479 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:49.926 { 00:22:49.926 "name": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:49.926 "aliases": [ 00:22:49.926 "lvs/nvme0n1p0" 00:22:49.926 ], 00:22:49.926 "product_name": "Logical Volume", 00:22:49.926 "block_size": 4096, 00:22:49.926 "num_blocks": 26476544, 00:22:49.926 "uuid": "afc88700-f0cf-4330-ba65-71f9ec8e4479", 00:22:49.926 "assigned_rate_limits": { 00:22:49.926 "rw_ios_per_sec": 0, 00:22:49.926 "rw_mbytes_per_sec": 0, 00:22:49.926 "r_mbytes_per_sec": 0, 00:22:49.926 "w_mbytes_per_sec": 0 00:22:49.926 }, 00:22:49.926 "claimed": false, 00:22:49.926 "zoned": false, 00:22:49.926 "supported_io_types": { 00:22:49.926 "read": true, 00:22:49.926 "write": true, 00:22:49.926 "unmap": true, 00:22:49.926 "flush": false, 00:22:49.926 "reset": true, 00:22:49.926 "nvme_admin": false, 00:22:49.926 "nvme_io": false, 00:22:49.926 "nvme_io_md": false, 00:22:49.926 "write_zeroes": true, 00:22:49.926 "zcopy": false, 00:22:49.926 "get_zone_info": false, 00:22:49.926 "zone_management": false, 00:22:49.926 "zone_append": false, 00:22:49.926 "compare": false, 00:22:49.926 "compare_and_write": false, 00:22:49.926 "abort": false, 00:22:49.926 "seek_hole": true, 00:22:49.926 "seek_data": true, 00:22:49.926 "copy": false, 00:22:49.926 "nvme_iov_md": false 00:22:49.926 }, 00:22:49.926 "driver_specific": { 00:22:49.926 "lvol": { 00:22:49.926 "lvol_store_uuid": "ab6b9624-0a6a-4c40-973c-ae207c035d1c", 00:22:49.926 "base_bdev": "nvme0n1", 00:22:49.926 "thin_provision": true, 00:22:49.926 "num_allocated_clusters": 0, 00:22:49.926 "snapshot": false, 00:22:49.926 "clone": false, 00:22:49.926 "esnap_clone": false 00:22:49.926 } 00:22:49.926 } 00:22:49.926 } 00:22:49.926 ]' 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:49.926 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:50.186 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:50.186 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:50.186 15:49:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:50.186 15:49:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:50.186 15:49:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d afc88700-f0cf-4330-ba65-71f9ec8e4479 -c nvc0n1p0 --l2p_dram_limit 20 00:22:50.186 [2024-12-06 15:49:38.834346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.834541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.186 [2024-12-06 15:49:38.834582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.186 [2024-12-06 15:49:38.834597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.834678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.834696] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.186 [2024-12-06 15:49:38.834715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:50.186 [2024-12-06 15:49:38.834727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.834761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.186 [2024-12-06 15:49:38.835059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.186 [2024-12-06 15:49:38.835102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.835115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.186 [2024-12-06 15:49:38.835135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:22:50.186 [2024-12-06 15:49:38.835146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.835264] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c9815a5d-2076-4f8d-99a7-79476d971c49 00:22:50.186 [2024-12-06 15:49:38.837504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.837684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:50.186 [2024-12-06 15:49:38.837711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:50.186 [2024-12-06 15:49:38.837727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.850038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.850090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.186 [2024-12-06 15:49:38.850107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.255 ms 00:22:50.186 [2024-12-06 15:49:38.850124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.850227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.850249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.186 [2024-12-06 15:49:38.850266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:50.186 [2024-12-06 15:49:38.850279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.850397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.850419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.186 [2024-12-06 15:49:38.850433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:50.186 [2024-12-06 15:49:38.850447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.850484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.186 [2024-12-06 15:49:38.852994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.853031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.186 [2024-12-06 15:49:38.853053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:22:50.186 [2024-12-06 15:49:38.853063] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.853106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.853121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.186 [2024-12-06 15:49:38.853139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:50.186 [2024-12-06 15:49:38.853149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.853171] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:50.186 [2024-12-06 15:49:38.853328] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.186 [2024-12-06 15:49:38.853351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.186 [2024-12-06 15:49:38.853365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.186 [2024-12-06 15:49:38.853382] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.186 [2024-12-06 15:49:38.853394] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.186 [2024-12-06 15:49:38.853409] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.186 [2024-12-06 15:49:38.853419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.186 [2024-12-06 15:49:38.853432] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.186 [2024-12-06 15:49:38.853444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.186 [2024-12-06 15:49:38.853458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.853468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.186 [2024-12-06 15:49:38.853481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:22:50.186 [2024-12-06 15:49:38.853491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.853585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.186 [2024-12-06 15:49:38.853598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.186 [2024-12-06 15:49:38.853627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:50.186 [2024-12-06 15:49:38.853647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.186 [2024-12-06 15:49:38.853751] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.186 [2024-12-06 15:49:38.853775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.186 [2024-12-06 15:49:38.853792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.186 [2024-12-06 15:49:38.853803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.186 [2024-12-06 15:49:38.853817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.186 [2024-12-06 15:49:38.853827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.186 [2024-12-06 15:49:38.853840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.186 
[2024-12-06 15:49:38.853850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.186 [2024-12-06 15:49:38.853862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.186 [2024-12-06 15:49:38.853871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.186 [2024-12-06 15:49:38.853884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.186 [2024-12-06 15:49:38.853893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.186 [2024-12-06 15:49:38.853909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.186 [2024-12-06 15:49:38.853919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.186 [2024-12-06 15:49:38.853946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.186 [2024-12-06 15:49:38.853956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.186 [2024-12-06 15:49:38.854003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.186 [2024-12-06 15:49:38.854019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.186 [2024-12-06 15:49:38.854035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.187 [2024-12-06 15:49:38.854060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.187 [2024-12-06 15:49:38.854095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.187 [2024-12-06 15:49:38.854131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.187 [2024-12-06 15:49:38.854167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.187 [2024-12-06 15:49:38.854203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.187 [2024-12-06 15:49:38.854226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.187 [2024-12-06 15:49:38.854236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.187 [2024-12-06 15:49:38.854251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.187 [2024-12-06 15:49:38.854261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.187 [2024-12-06 15:49:38.854274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:50.187 [2024-12-06 15:49:38.854284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.187 [2024-12-06 15:49:38.854308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.187 [2024-12-06 15:49:38.854321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854330] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.187 [2024-12-06 15:49:38.854361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.187 [2024-12-06 15:49:38.854386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.187 [2024-12-06 15:49:38.854410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.187 [2024-12-06 15:49:38.854424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.187 [2024-12-06 15:49:38.854434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.187 [2024-12-06 15:49:38.854448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.187 [2024-12-06 15:49:38.854459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.187 [2024-12-06 15:49:38.854471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.187 [2024-12-06 15:49:38.854484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.187 [2024-12-06 15:49:38.854500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.187 [2024-12-06 15:49:38.854525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.187 [2024-12-06 15:49:38.854536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.187 [2024-12-06 15:49:38.854549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.187 [2024-12-06 15:49:38.854560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.187 [2024-12-06 15:49:38.854575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.187 [2024-12-06 15:49:38.854586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.187 [2024-12-06 15:49:38.854612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.187 [2024-12-06 15:49:38.854623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.187 [2024-12-06 15:49:38.854636] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.187 [2024-12-06 15:49:38.854700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.187 [2024-12-06 15:49:38.854728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.187 [2024-12-06 15:49:38.854754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.187 [2024-12-06 15:49:38.854764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.187 [2024-12-06 15:49:38.854778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.187 [2024-12-06 15:49:38.854790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.187 [2024-12-06 15:49:38.854814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.187 [2024-12-06 15:49:38.854828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:22:50.187 [2024-12-06 15:49:38.854841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.187 [2024-12-06 15:49:38.854910] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
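Each management step above is logged by mngt/ftl_mngt.c as an Action record followed by name, duration, and status lines, so per-step startup timing can be pulled straight out of a captured console log. A minimal sketch, assuming the output has been saved to a file (the name ftl.log is hypothetical):

    # Pair each "name: <step>" line with the "duration: <N> ms" line that follows,
    # then print the slowest FTL management steps first.
    awk -F'name: |duration: ' \
        '/428:trace_step/ {step=$2} /430:trace_step/ {print $2 "\t" step}' ftl.log |
      sort -rn | head

For the startup traced here, that would put the NV cache scrub that follows (2674.484 ms) at the top by a wide margin, with most other steps in the low milliseconds.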
00:22:50.187 [2024-12-06 15:49:38.854942] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:53.476 [2024-12-06 15:49:41.529381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.529478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:53.476 [2024-12-06 15:49:41.529516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2674.484 ms 00:22:53.476 [2024-12-06 15:49:41.529531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.545503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.545575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.476 [2024-12-06 15:49:41.545600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.842 ms 00:22:53.476 [2024-12-06 15:49:41.545619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.545748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.545769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.476 [2024-12-06 15:49:41.545786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:53.476 [2024-12-06 15:49:41.545799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.568388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.568453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.476 [2024-12-06 15:49:41.568487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.531 ms 00:22:53.476 [2024-12-06 15:49:41.568502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.568546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.568568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.476 [2024-12-06 15:49:41.568580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.476 [2024-12-06 15:49:41.568603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.569526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.569725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.476 [2024-12-06 15:49:41.569850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:22:53.476 [2024-12-06 15:49:41.569988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.570196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.570233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.476 [2024-12-06 15:49:41.570251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:53.476 [2024-12-06 15:49:41.570265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.580557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.580734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.476 [2024-12-06 
15:49:41.580760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.267 ms 00:22:53.476 [2024-12-06 15:49:41.580775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.590123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:53.476 [2024-12-06 15:49:41.598917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.599096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.476 [2024-12-06 15:49:41.599129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.037 ms 00:22:53.476 [2024-12-06 15:49:41.599142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.663302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.476 [2024-12-06 15:49:41.663625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:53.476 [2024-12-06 15:49:41.663755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.114 ms 00:22:53.476 [2024-12-06 15:49:41.663808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.476 [2024-12-06 15:49:41.664147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.477 [2024-12-06 15:49:41.664304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.477 [2024-12-06 15:49:41.664455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:22:53.477 [2024-12-06 15:49:41.664622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.477 [2024-12-06 15:49:41.668206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.477 [2024-12-06 15:49:41.668383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:53.477 [2024-12-06 15:49:41.668521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:22:53.477 [2024-12-06 15:49:41.668636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.477 [2024-12-06 15:49:41.671499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.477 [2024-12-06 15:49:41.671667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:53.477 [2024-12-06 15:49:41.671799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:22:53.477 [2024-12-06 15:49:41.671961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.477 [2024-12-06 15:49:41.672486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.477 [2024-12-06 15:49:41.672643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.477 [2024-12-06 15:49:41.672754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:22:53.477 [2024-12-06 15:49:41.672861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.477 [2024-12-06 15:49:41.703479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.477 [2024-12-06 15:49:41.703662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:53.477 [2024-12-06 15:49:41.703776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.537 ms 00:22:53.477 [2024-12-06 15:49:41.703824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.477 [2024-12-06 
15:49:41.709406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.477 [2024-12-06 15:49:41.709565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:53.477 [2024-12-06 15:49:41.709678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms
00:22:53.477 [2024-12-06 15:49:41.709725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.477 [2024-12-06 15:49:41.713092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.477 [2024-12-06 15:49:41.713233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:22:53.477 [2024-12-06 15:49:41.713358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms
00:22:53.477 [2024-12-06 15:49:41.713403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.477 [2024-12-06 15:49:41.717099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.477 [2024-12-06 15:49:41.717258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:53.477 [2024-12-06 15:49:41.717367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms
00:22:53.477 [2024-12-06 15:49:41.717411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.477 [2024-12-06 15:49:41.717488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.477 [2024-12-06 15:49:41.717671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:53.477 [2024-12-06 15:49:41.717731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:22:53.477 [2024-12-06 15:49:41.717768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.477 [2024-12-06 15:49:41.717882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.477 [2024-12-06 15:49:41.717949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:53.477 [2024-12-06 15:49:41.717998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:22:53.477 [2024-12-06 15:49:41.718096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.477 [2024-12-06 15:49:41.719775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2884.865 ms, result 0
00:22:53.477 {
00:22:53.477   "name": "ftl0",
00:22:53.477   "uuid": "c9815a5d-2076-4f8d-99a7-79476d971c49"
00:22:53.477 }
00:22:53.477 15:49:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:22:53.477 15:49:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:22:53.477 15:49:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:22:53.477 15:49:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-12-06 15:49:42.149181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
Running I/O for 4 seconds...
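The bdevperf.sh@28 trace above is the script's readiness gate: it asks the target for FTL stats over RPC and only proceeds when the reported name matches the bdev it just created. The same check as a standalone pipeline, assuming a running SPDK target and the repository paths from this log:

    # Fail fast unless the freshly started FTL bdev is visible over RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
      | jq -r .name | grep -qw ftl0 || { echo 'ftl0 not ready' >&2; exit 1; }

The first workload then runs with -o 69632, i.e. 64 KiB + 4 KiB per I/O; since that exceeds the 65536-byte zero-copy threshold, bdevperf logs that the zero copy mechanism is disabled for the run.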
00:22:55.793 1606.00 IOPS, 106.65 MiB/s
[2024-12-06T15:49:45.423Z] 1585.00 IOPS, 105.25 MiB/s
[2024-12-06T15:49:46.360Z] 1579.33 IOPS, 104.88 MiB/s
00:22:57.667 Latency(us)
[2024-12-06T15:49:46.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:57.667 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:57.667 ftl0 : 4.00 1548.92 102.86 0.00 0.00 674.46 303.48 4855.62
00:22:57.667 [2024-12-06T15:49:46.360Z] ===================================================================================================================
00:22:57.667 [2024-12-06T15:49:46.360Z] Total : 1548.92 102.86 0.00 0.00 674.46 303.48 4855.62
00:22:57.667 [2024-12-06 15:49:46.155552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:57.667 {
00:22:57.667   "results": [
00:22:57.667     {
00:22:57.667       "job": "ftl0",
00:22:57.667       "core_mask": "0x1",
00:22:57.667       "workload": "randwrite",
00:22:57.667       "status": "finished",
00:22:57.667       "queue_depth": 1,
00:22:57.667       "io_size": 69632,
00:22:57.667       "runtime": 4.000219,
00:22:57.667       "iops": 1548.91519689297,
00:22:57.667       "mibps": 102.85764979367379,
00:22:57.667       "io_failed": 0,
00:22:57.667       "io_timeout": 0,
00:22:57.667       "avg_latency_us": 674.4618158342626,
00:22:57.667       "min_latency_us": 303.47636363636366,
00:22:57.667       "max_latency_us": 4855.6218181818185
00:22:57.667     }
00:22:57.667   ],
00:22:57.667   "core_count": 1
00:22:57.667 }
00:22:57.667 15:49:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-06 15:49:46.301147] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
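While the depth-128 run is in flight, the numbers just reported for the depth-1 job are worth a cross-check: the throughput field is derived from the other JSON fields as iops × io_size / 2^20, so 1548.92 IOPS at 69632 bytes per I/O gives exactly the 102.86 MiB/s shown. The same arithmetic, assuming the results object above was captured to a hypothetical results.json:

    # 1548.91519689297 * 69632 / 1048576 ≈ 102.86 (matches the "mibps" field)
    jq '.results[0] | .iops * .io_size / 1048576' results.json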
00:23:00.000 6998.00 IOPS, 27.34 MiB/s
[2024-12-06T15:49:49.630Z] 6516.00 IOPS, 25.45 MiB/s
[2024-12-06T15:49:50.569Z] 6599.00 IOPS, 25.78 MiB/s
[2024-12-06T15:49:50.569Z] 6648.25 IOPS, 25.97 MiB/s
00:23:01.876 Latency(us)
[2024-12-06T15:49:50.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.876 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:01.876 ftl0 : 4.02 6640.64 25.94 0.00 0.00 19215.00 363.05 65297.69
00:23:01.876 [2024-12-06T15:49:50.569Z] ===================================================================================================================
00:23:01.876 [2024-12-06T15:49:50.569Z] Total : 6640.64 25.94 0.00 0.00 19215.00 0.00 65297.69
00:23:01.876 [2024-12-06 15:49:50.331735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:01.876 {
00:23:01.876   "results": [
00:23:01.876     {
00:23:01.876       "job": "ftl0",
00:23:01.876       "core_mask": "0x1",
00:23:01.876       "workload": "randwrite",
00:23:01.876       "status": "finished",
00:23:01.876       "queue_depth": 128,
00:23:01.876       "io_size": 4096,
00:23:01.876       "runtime": 4.023408,
00:23:01.876       "iops": 6640.638980685031,
00:23:01.876       "mibps": 25.939996018300903,
00:23:01.876       "io_failed": 0,
00:23:01.876       "io_timeout": 0,
00:23:01.876       "avg_latency_us": 19215.002276980453,
00:23:01.876       "min_latency_us": 363.05454545454546,
00:23:01.876       "max_latency_us": 65297.68727272727
00:23:01.876     }
00:23:01.876   ],
00:23:01.876   "core_count": 1
00:23:01.876 }
00:23:01.876 15:49:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-06 15:49:50.490257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
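The depth-128 randwrite figures above also pass a Little's law sanity check: average concurrency equals IOPS times mean latency, and 6640.64 IOPS × 19215 µs works out to roughly 127.6 requests in flight, so the queue of 128 stayed effectively saturated for the whole 4-second run. A quick check in shell:

    # Little's law: in-flight I/Os = IOPS * avg latency (in seconds); ≈ queue_depth 128
    echo '6640.64 * 19215.002276980453 / 1000000' | bc -l    # ≈ 127.60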
00:23:04.186 5323.00 IOPS, 20.79 MiB/s
[2024-12-06T15:49:53.815Z] 5313.00 IOPS, 20.75 MiB/s
[2024-12-06T15:49:54.751Z] 5294.33 IOPS, 20.68 MiB/s
[2024-12-06T15:49:54.751Z] 5285.25 IOPS, 20.65 MiB/s
00:23:06.058 Latency(us)
[2024-12-06T15:49:54.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.058 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:06.058 Verification LBA range: start 0x0 length 0x1400000
00:23:06.058 ftl0 : 4.01 5297.34 20.69 0.00 0.00 24073.91 366.78 25856.93
00:23:06.058 [2024-12-06T15:49:54.751Z] ===================================================================================================================
00:23:06.058 [2024-12-06T15:49:54.751Z] Total : 5297.34 20.69 0.00 0.00 24073.91 0.00 25856.93
00:23:06.058 [2024-12-06 15:49:54.513766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:06.058 {
00:23:06.058   "results": [
00:23:06.058     {
00:23:06.058       "job": "ftl0",
00:23:06.058       "core_mask": "0x1",
00:23:06.058       "workload": "verify",
00:23:06.058       "status": "finished",
00:23:06.058       "verify_range": {
00:23:06.058         "start": 0,
00:23:06.058         "length": 20971520
00:23:06.058       },
00:23:06.058       "queue_depth": 128,
00:23:06.058       "io_size": 4096,
00:23:06.058       "runtime": 4.014657,
00:23:06.058       "iops": 5297.339224745726,
00:23:06.058       "mibps": 20.692731346662992,
00:23:06.058       "io_failed": 0,
00:23:06.058       "io_timeout": 0,
00:23:06.058       "avg_latency_us": 24073.912406844578,
00:23:06.058       "min_latency_us": 366.7781818181818,
00:23:06.058       "max_latency_us": 25856.93090909091
00:23:06.058     }
00:23:06.058   ],
00:23:06.058   "core_count": 1
00:23:06.058 }
00:23:06.058 15:49:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-06 15:49:54.774143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.318 [2024-12-06 15:49:54.774376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:06.318 [2024-12-06 15:49:54.774412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:06.318 [2024-12-06 15:49:54.774435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.318 [2024-12-06 15:49:54.774481] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:06.318 [2024-12-06 15:49:54.775372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.318 [2024-12-06 15:49:54.775442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:06.318 [2024-12-06 15:49:54.775457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms
00:23:06.318 [2024-12-06 15:49:54.775472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.318 [2024-12-06 15:49:54.777546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.318 [2024-12-06 15:49:54.777722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:06.318 [2024-12-06 15:49:54.777747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms
00:23:06.318 [2024-12-06 15:49:54.777776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:06.318 [2024-12-06 15:49:54.955218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:06.318 [2024-12-06 15:49:54.955281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:23:06.318 [2024-12-06 15:49:54.955303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.415 ms 00:23:06.318 [2024-12-06 15:49:54.955317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.960617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.960806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:06.318 [2024-12-06 15:49:54.960847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.263 ms 00:23:06.318 [2024-12-06 15:49:54.960872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.962234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.962309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:06.318 [2024-12-06 15:49:54.962324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:23:06.318 [2024-12-06 15:49:54.962337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.968186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.968286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:06.318 [2024-12-06 15:49:54.968318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.790 ms 00:23:06.318 [2024-12-06 15:49:54.968347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.968576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.968614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:06.318 [2024-12-06 15:49:54.968637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:23:06.318 [2024-12-06 15:49:54.968659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.971159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.971420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:06.318 [2024-12-06 15:49:54.971461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:23:06.318 [2024-12-06 15:49:54.971486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.973526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.973584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:06.318 [2024-12-06 15:49:54.973604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:23:06.318 [2024-12-06 15:49:54.973622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.975061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.975113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.318 [2024-12-06 15:49:54.975133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:23:06.318 [2024-12-06 15:49:54.975154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.976463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.318 [2024-12-06 15:49:54.976518] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.318 [2024-12-06 15:49:54.976538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:23:06.318 [2024-12-06 15:49:54.976556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.318 [2024-12-06 15:49:54.976604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.318 [2024-12-06 15:49:54.976652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:06.318 [2024-12-06 15:49:54.976672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.976984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:23:06.319 [2024-12-06 15:49:54.977092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.977983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:06.319 [2024-12-06 15:49:54.978319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978420] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.320 [2024-12-06 15:49:54.978509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.320 [2024-12-06 15:49:54.978524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9815a5d-2076-4f8d-99a7-79476d971c49 00:23:06.320 [2024-12-06 15:49:54.978567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:06.320 [2024-12-06 15:49:54.978582] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:06.320 [2024-12-06 15:49:54.978599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:06.320 [2024-12-06 15:49:54.978615] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:06.320 [2024-12-06 15:49:54.978636] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:06.320 [2024-12-06 15:49:54.978651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:06.320 [2024-12-06 15:49:54.978679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:06.320 [2024-12-06 15:49:54.978692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:06.320 [2024-12-06 15:49:54.978709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:06.320 [2024-12-06 15:49:54.978724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.320 [2024-12-06 15:49:54.978747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:06.320 [2024-12-06 15:49:54.978764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:23:06.320 [2024-12-06 15:49:54.978786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.981862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.320 [2024-12-06 15:49:54.982064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:06.320 [2024-12-06 15:49:54.982219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.044 ms 00:23:06.320 [2024-12-06 15:49:54.982375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.982618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.320 [2024-12-06 15:49:54.982701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:06.320 [2024-12-06 15:49:54.982829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:06.320 [2024-12-06 15:49:54.983015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.993093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.320 [2024-12-06 15:49:54.993303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.320 [2024-12-06 15:49:54.993684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.320 [2024-12-06 15:49:54.993845] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.994028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.320 [2024-12-06 15:49:54.994119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.320 [2024-12-06 15:49:54.994275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.320 [2024-12-06 15:49:54.994423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.994617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.320 [2024-12-06 15:49:54.994708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.320 [2024-12-06 15:49:54.994861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.320 [2024-12-06 15:49:54.994955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.320 [2024-12-06 15:49:54.995138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.320 [2024-12-06 15:49:54.995325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.320 [2024-12-06 15:49:54.995474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.320 [2024-12-06 15:49:54.995515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.012397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.012691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.579 [2024-12-06 15:49:55.012727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.012749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.027777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.028059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.579 [2024-12-06 15:49:55.028104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.028144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.028277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.028310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.579 [2024-12-06 15:49:55.028328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.028348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.028432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.028486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.579 [2024-12-06 15:49:55.028507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.028530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.028671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.028703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.579 [2024-12-06 15:49:55.028721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:06.579 [2024-12-06 15:49:55.028740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.028804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.028835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:06.579 [2024-12-06 15:49:55.028852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.028871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.028970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.029003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.579 [2024-12-06 15:49:55.029021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.029040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.029134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.579 [2024-12-06 15:49:55.029164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.579 [2024-12-06 15:49:55.029181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.579 [2024-12-06 15:49:55.029205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.579 [2024-12-06 15:49:55.029429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 255.222 ms, result 0 00:23:06.579 true 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88574 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88574 ']' 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88574 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88574 00:23:06.579 killing process with pid 88574 00:23:06.579 Received shutdown signal, test time was about 4.000000 seconds 00:23:06.579 00:23:06.579 Latency(us) 00:23:06.579 [2024-12-06T15:49:55.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.579 [2024-12-06T15:49:55.272Z] =================================================================================================================== 00:23:06.579 [2024-12-06T15:49:55.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88574' 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88574 00:23:06.579 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88574 00:23:06.837 Remove shared memory files 00:23:06.837 15:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:06.837 15:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:06.837 15:49:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:06.837 15:49:55 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:07.095 ************************************ 00:23:07.095 END TEST ftl_bdevperf 00:23:07.095 ************************************ 00:23:07.095 00:23:07.095 real 0m21.639s 00:23:07.095 user 0m25.231s 00:23:07.095 sys 0m1.172s 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.095 15:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:07.095 15:49:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:07.095 15:49:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:07.095 15:49:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.095 15:49:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:07.095 ************************************ 00:23:07.095 START TEST ftl_trim 00:23:07.095 ************************************ 00:23:07.095 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:07.095 * Looking for test storage... 00:23:07.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:07.095 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:07.095 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:07.095 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:07.095 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.095 15:49:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.353 15:49:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.354 15:49:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:07.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.354 --rc genhtml_branch_coverage=1 00:23:07.354 --rc genhtml_function_coverage=1 00:23:07.354 --rc genhtml_legend=1 00:23:07.354 --rc geninfo_all_blocks=1 00:23:07.354 --rc geninfo_unexecuted_blocks=1 00:23:07.354 00:23:07.354 ' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:07.354 15:49:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=88909 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 88909 00:23:07.354 15:49:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88909 ']' 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.354 15:49:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:07.354 [2024-12-06 15:49:55.961044] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:07.354 [2024-12-06 15:49:55.961539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88909 ] 00:23:07.613 [2024-12-06 15:49:56.116795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.613 [2024-12-06 15:49:56.158042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.613 [2024-12-06 15:49:56.158095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.613 [2024-12-06 15:49:56.158169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.549 15:49:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.549 15:49:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:08.549 15:49:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:08.808 15:49:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:08.808 15:49:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:08.808 15:49:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:08.808 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:08.808 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:08.808 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:08.808 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:08.808 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:09.066 { 00:23:09.066 "name": "nvme0n1", 00:23:09.066 "aliases": [ 
00:23:09.066 "fd5a74c2-a580-4862-a945-94297d02e5c7" 00:23:09.066 ], 00:23:09.066 "product_name": "NVMe disk", 00:23:09.066 "block_size": 4096, 00:23:09.066 "num_blocks": 1310720, 00:23:09.066 "uuid": "fd5a74c2-a580-4862-a945-94297d02e5c7", 00:23:09.066 "numa_id": -1, 00:23:09.066 "assigned_rate_limits": { 00:23:09.066 "rw_ios_per_sec": 0, 00:23:09.066 "rw_mbytes_per_sec": 0, 00:23:09.066 "r_mbytes_per_sec": 0, 00:23:09.066 "w_mbytes_per_sec": 0 00:23:09.066 }, 00:23:09.066 "claimed": true, 00:23:09.066 "claim_type": "read_many_write_one", 00:23:09.066 "zoned": false, 00:23:09.066 "supported_io_types": { 00:23:09.066 "read": true, 00:23:09.066 "write": true, 00:23:09.066 "unmap": true, 00:23:09.066 "flush": true, 00:23:09.066 "reset": true, 00:23:09.066 "nvme_admin": true, 00:23:09.066 "nvme_io": true, 00:23:09.066 "nvme_io_md": false, 00:23:09.066 "write_zeroes": true, 00:23:09.066 "zcopy": false, 00:23:09.066 "get_zone_info": false, 00:23:09.066 "zone_management": false, 00:23:09.066 "zone_append": false, 00:23:09.066 "compare": true, 00:23:09.066 "compare_and_write": false, 00:23:09.066 "abort": true, 00:23:09.066 "seek_hole": false, 00:23:09.066 "seek_data": false, 00:23:09.066 "copy": true, 00:23:09.066 "nvme_iov_md": false 00:23:09.066 }, 00:23:09.066 "driver_specific": { 00:23:09.066 "nvme": [ 00:23:09.066 { 00:23:09.066 "pci_address": "0000:00:11.0", 00:23:09.066 "trid": { 00:23:09.066 "trtype": "PCIe", 00:23:09.066 "traddr": "0000:00:11.0" 00:23:09.066 }, 00:23:09.066 "ctrlr_data": { 00:23:09.066 "cntlid": 0, 00:23:09.066 "vendor_id": "0x1b36", 00:23:09.066 "model_number": "QEMU NVMe Ctrl", 00:23:09.066 "serial_number": "12341", 00:23:09.066 "firmware_revision": "8.0.0", 00:23:09.066 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:09.066 "oacs": { 00:23:09.066 "security": 0, 00:23:09.066 "format": 1, 00:23:09.066 "firmware": 0, 00:23:09.066 "ns_manage": 1 00:23:09.066 }, 00:23:09.066 "multi_ctrlr": false, 00:23:09.066 "ana_reporting": false 00:23:09.066 }, 00:23:09.066 "vs": { 00:23:09.066 "nvme_version": "1.4" 00:23:09.066 }, 00:23:09.066 "ns_data": { 00:23:09.066 "id": 1, 00:23:09.066 "can_share": false 00:23:09.066 } 00:23:09.066 } 00:23:09.066 ], 00:23:09.066 "mp_policy": "active_passive" 00:23:09.066 } 00:23:09.066 } 00:23:09.066 ]' 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:09.066 15:49:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:09.066 15:49:57 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:09.066 15:49:57 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:09.066 15:49:57 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:09.066 15:49:57 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:09.066 15:49:57 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:09.324 15:49:57 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ab6b9624-0a6a-4c40-973c-ae207c035d1c 00:23:09.324 15:49:57 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:09.324 15:49:57 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ab6b9624-0a6a-4c40-973c-ae207c035d1c 00:23:09.582 15:49:58 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:09.840 15:49:58 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=53fd3d8d-9a85-4805-b1d2-fc12dcce8123 00:23:09.840 15:49:58 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53fd3d8d-9a85-4805-b1d2-fc12dcce8123 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:10.100 15:49:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.100 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.100 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:10.100 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:10.100 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:10.100 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.358 15:49:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:10.358 { 00:23:10.358 "name": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:10.358 "aliases": [ 00:23:10.358 "lvs/nvme0n1p0" 00:23:10.358 ], 00:23:10.358 "product_name": "Logical Volume", 00:23:10.358 "block_size": 4096, 00:23:10.358 "num_blocks": 26476544, 00:23:10.358 "uuid": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:10.358 "assigned_rate_limits": { 00:23:10.358 "rw_ios_per_sec": 0, 00:23:10.358 "rw_mbytes_per_sec": 0, 00:23:10.358 "r_mbytes_per_sec": 0, 00:23:10.358 "w_mbytes_per_sec": 0 00:23:10.358 }, 00:23:10.358 "claimed": false, 00:23:10.358 "zoned": false, 00:23:10.358 "supported_io_types": { 00:23:10.358 "read": true, 00:23:10.358 "write": true, 00:23:10.358 "unmap": true, 00:23:10.358 "flush": false, 00:23:10.358 "reset": true, 00:23:10.358 "nvme_admin": false, 00:23:10.358 "nvme_io": false, 00:23:10.358 "nvme_io_md": false, 00:23:10.358 "write_zeroes": true, 00:23:10.358 "zcopy": false, 00:23:10.358 "get_zone_info": false, 00:23:10.358 "zone_management": false, 00:23:10.358 "zone_append": false, 00:23:10.358 "compare": false, 00:23:10.358 "compare_and_write": false, 00:23:10.358 "abort": false, 00:23:10.358 "seek_hole": true, 00:23:10.358 "seek_data": true, 00:23:10.358 "copy": false, 00:23:10.358 "nvme_iov_md": false 00:23:10.358 }, 00:23:10.358 "driver_specific": { 00:23:10.358 "lvol": { 00:23:10.358 "lvol_store_uuid": "53fd3d8d-9a85-4805-b1d2-fc12dcce8123", 00:23:10.358 "base_bdev": "nvme0n1", 00:23:10.358 "thin_provision": true, 00:23:10.358 "num_allocated_clusters": 0, 00:23:10.358 "snapshot": false, 00:23:10.358 "clone": false, 00:23:10.358 "esnap_clone": false 00:23:10.358 } 00:23:10.358 } 00:23:10.359 } 00:23:10.359 ]' 00:23:10.359 15:49:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:10.359 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:10.359 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:10.616 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:10.616 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:10.616 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:10.616 15:49:59 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:10.616 15:49:59 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:10.616 15:49:59 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:10.874 15:49:59 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:10.875 15:49:59 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:10.875 15:49:59 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.875 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:10.875 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:10.875 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:10.875 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:10.875 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:11.133 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:11.133 { 00:23:11.133 "name": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:11.133 "aliases": [ 00:23:11.133 "lvs/nvme0n1p0" 00:23:11.133 ], 00:23:11.133 "product_name": "Logical Volume", 00:23:11.133 "block_size": 4096, 00:23:11.133 "num_blocks": 26476544, 00:23:11.133 "uuid": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:11.133 "assigned_rate_limits": { 00:23:11.133 "rw_ios_per_sec": 0, 00:23:11.133 "rw_mbytes_per_sec": 0, 00:23:11.133 "r_mbytes_per_sec": 0, 00:23:11.133 "w_mbytes_per_sec": 0 00:23:11.133 }, 00:23:11.133 "claimed": false, 00:23:11.133 "zoned": false, 00:23:11.133 "supported_io_types": { 00:23:11.133 "read": true, 00:23:11.133 "write": true, 00:23:11.133 "unmap": true, 00:23:11.133 "flush": false, 00:23:11.133 "reset": true, 00:23:11.133 "nvme_admin": false, 00:23:11.133 "nvme_io": false, 00:23:11.133 "nvme_io_md": false, 00:23:11.133 "write_zeroes": true, 00:23:11.133 "zcopy": false, 00:23:11.133 "get_zone_info": false, 00:23:11.133 "zone_management": false, 00:23:11.133 "zone_append": false, 00:23:11.133 "compare": false, 00:23:11.133 "compare_and_write": false, 00:23:11.133 "abort": false, 00:23:11.133 "seek_hole": true, 00:23:11.133 "seek_data": true, 00:23:11.133 "copy": false, 00:23:11.133 "nvme_iov_md": false 00:23:11.133 }, 00:23:11.133 "driver_specific": { 00:23:11.133 "lvol": { 00:23:11.133 "lvol_store_uuid": "53fd3d8d-9a85-4805-b1d2-fc12dcce8123", 00:23:11.133 "base_bdev": "nvme0n1", 00:23:11.133 "thin_provision": true, 00:23:11.133 "num_allocated_clusters": 0, 00:23:11.133 "snapshot": false, 00:23:11.133 "clone": false, 00:23:11.133 "esnap_clone": false 00:23:11.133 } 00:23:11.133 } 00:23:11.133 } 00:23:11.133 ]' 00:23:11.133 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:11.134 15:49:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:23:11.134 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:11.134 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:11.134 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:11.134 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:11.134 15:49:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:11.134 15:49:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:11.392 15:49:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:11.392 15:49:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:11.392 15:49:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:11.392 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:11.392 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:11.392 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:11.392 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:11.392 15:49:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0ca69f9-9705-4b64-9d84-498cebc87870 00:23:11.650 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:11.650 { 00:23:11.650 "name": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:11.650 "aliases": [ 00:23:11.650 "lvs/nvme0n1p0" 00:23:11.650 ], 00:23:11.650 "product_name": "Logical Volume", 00:23:11.650 "block_size": 4096, 00:23:11.650 "num_blocks": 26476544, 00:23:11.650 "uuid": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:11.650 "assigned_rate_limits": { 00:23:11.650 "rw_ios_per_sec": 0, 00:23:11.650 "rw_mbytes_per_sec": 0, 00:23:11.650 "r_mbytes_per_sec": 0, 00:23:11.650 "w_mbytes_per_sec": 0 00:23:11.650 }, 00:23:11.650 "claimed": false, 00:23:11.650 "zoned": false, 00:23:11.650 "supported_io_types": { 00:23:11.650 "read": true, 00:23:11.650 "write": true, 00:23:11.650 "unmap": true, 00:23:11.650 "flush": false, 00:23:11.650 "reset": true, 00:23:11.650 "nvme_admin": false, 00:23:11.650 "nvme_io": false, 00:23:11.650 "nvme_io_md": false, 00:23:11.650 "write_zeroes": true, 00:23:11.650 "zcopy": false, 00:23:11.650 "get_zone_info": false, 00:23:11.650 "zone_management": false, 00:23:11.650 "zone_append": false, 00:23:11.650 "compare": false, 00:23:11.650 "compare_and_write": false, 00:23:11.650 "abort": false, 00:23:11.650 "seek_hole": true, 00:23:11.650 "seek_data": true, 00:23:11.650 "copy": false, 00:23:11.650 "nvme_iov_md": false 00:23:11.650 }, 00:23:11.650 "driver_specific": { 00:23:11.650 "lvol": { 00:23:11.650 "lvol_store_uuid": "53fd3d8d-9a85-4805-b1d2-fc12dcce8123", 00:23:11.650 "base_bdev": "nvme0n1", 00:23:11.650 "thin_provision": true, 00:23:11.650 "num_allocated_clusters": 0, 00:23:11.650 "snapshot": false, 00:23:11.650 "clone": false, 00:23:11.650 "esnap_clone": false 00:23:11.650 } 00:23:11.650 } 00:23:11.650 } 00:23:11.651 ]' 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:11.651 15:50:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:11.651 15:50:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:11.651 15:50:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d0ca69f9-9705-4b64-9d84-498cebc87870 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:11.909 [2024-12-06 15:50:00.534014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.909 [2024-12-06 15:50:00.534069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:11.909 [2024-12-06 15:50:00.534088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:11.909 [2024-12-06 15:50:00.534103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.909 [2024-12-06 15:50:00.537222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.909 [2024-12-06 15:50:00.537268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.909 [2024-12-06 15:50:00.537286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:23:11.909 [2024-12-06 15:50:00.537300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.909 [2024-12-06 15:50:00.537489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:11.910 [2024-12-06 15:50:00.537879] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:11.910 [2024-12-06 15:50:00.537922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.537955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.910 [2024-12-06 15:50:00.537975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:23:11.910 [2024-12-06 15:50:00.537988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.538267] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 847b1325-e058-4168-96b2-9a15170e279e 00:23:11.910 [2024-12-06 15:50:00.540790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.541034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:11.910 [2024-12-06 15:50:00.541082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:11.910 [2024-12-06 15:50:00.541096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.552207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.552438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.910 [2024-12-06 15:50:00.552517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.983 ms 00:23:11.910 [2024-12-06 15:50:00.552532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.552762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.552812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.910 [2024-12-06 15:50:00.552836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.098 ms 00:23:11.910 [2024-12-06 15:50:00.552856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.553016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.553040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:11.910 [2024-12-06 15:50:00.553059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:11.910 [2024-12-06 15:50:00.553075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.553142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:11.910 [2024-12-06 15:50:00.555969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.556024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.910 [2024-12-06 15:50:00.556040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.843 ms 00:23:11.910 [2024-12-06 15:50:00.556057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.556119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.556138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:11.910 [2024-12-06 15:50:00.556150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:11.910 [2024-12-06 15:50:00.556165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.556212] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:11.910 [2024-12-06 15:50:00.556388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:11.910 [2024-12-06 15:50:00.556409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:11.910 [2024-12-06 15:50:00.556426] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:11.910 [2024-12-06 15:50:00.556440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:11.910 [2024-12-06 15:50:00.556498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:11.910 [2024-12-06 15:50:00.556510] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:11.910 [2024-12-06 15:50:00.556523] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:11.910 [2024-12-06 15:50:00.556534] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:11.910 [2024-12-06 15:50:00.556547] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:11.910 [2024-12-06 15:50:00.556563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.910 [2024-12-06 15:50:00.556577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:11.910 [2024-12-06 15:50:00.556607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:23:11.910 [2024-12-06 15:50:00.556622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.556800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
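
By this point the trace has assembled the full FTL stack: the base NVMe controller at 0000:00:11.0, a thin-provisioned lvol on a fresh lvstore, the cache controller at 0000:00:10.0 split into a 5171 MiB write-buffer slice, and bdev_ftl_create stitching the two together; the NOTICE lines around here are the FTL startup steps (superblock init, memory pools, bands, layout). Condensed into the bare rpc.py calls, with the UUIDs the log reports, the bring-up reads as below — an illustrative recipe distilled from the trace (it omits the stale-lvstore cleanup), not the verbatim trim.sh/common.sh flow.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: 103424 MiB thin lvol on top of nvme0n1 (0000:00:11.0).
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs   # -> 53fd3d8d-9a85-4805-b1d2-fc12dcce8123
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 53fd3d8d-9a85-4805-b1d2-fc12dcce8123
  # NV cache: a 5171 MiB split of nvc0n1 (0000:00:10.0) becomes nvc0n1p0.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on cores 0-2 with a 60 MiB L2P budget; -t 240 matches trim.sh's timeout.
  $rpc -t 240 bdev_ftl_create -b ftl0 -d d0ca69f9-9705-4b64-9d84-498cebc87870 \
       -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The repeated get_bdev_size calls sprinkled through the trace reduce to block_size × num_blocks reported in MiB: 4096 B × 26476544 blocks = 103424 MiB for the lvol, which is the figure echoed back after each pair of jq queries.
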
00:23:11.910 [2024-12-06 15:50:00.556840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:11.910 [2024-12-06 15:50:00.556865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:11.910 [2024-12-06 15:50:00.556894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.910 [2024-12-06 15:50:00.557076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:11.910 [2024-12-06 15:50:00.557105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:11.910 [2024-12-06 15:50:00.557116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:11.910 [2024-12-06 15:50:00.557154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:11.910 [2024-12-06 15:50:00.557184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.910 [2024-12-06 15:50:00.557205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:11.910 [2024-12-06 15:50:00.557217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:11.910 [2024-12-06 15:50:00.557226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.910 [2024-12-06 15:50:00.557240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:11.910 [2024-12-06 15:50:00.557250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:11.910 [2024-12-06 15:50:00.557293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:11.910 [2024-12-06 15:50:00.557314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:11.910 [2024-12-06 15:50:00.557346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:11.910 [2024-12-06 15:50:00.557403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:11.910 [2024-12-06 15:50:00.557440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557464] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:23:11.910 [2024-12-06 15:50:00.557479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:11.910 [2024-12-06 15:50:00.557513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.910 [2024-12-06 15:50:00.557535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:11.910 [2024-12-06 15:50:00.557548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:11.910 [2024-12-06 15:50:00.557558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.910 [2024-12-06 15:50:00.557571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:11.910 [2024-12-06 15:50:00.557581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:11.910 [2024-12-06 15:50:00.557593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:11.910 [2024-12-06 15:50:00.557616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:11.910 [2024-12-06 15:50:00.557627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557639] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:11.910 [2024-12-06 15:50:00.557650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:11.910 [2024-12-06 15:50:00.557682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.910 [2024-12-06 15:50:00.557706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:11.910 [2024-12-06 15:50:00.557716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:11.910 [2024-12-06 15:50:00.557728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:11.910 [2024-12-06 15:50:00.557754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:11.910 [2024-12-06 15:50:00.557768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:11.910 [2024-12-06 15:50:00.557779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:11.910 [2024-12-06 15:50:00.557793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:11.911 [2024-12-06 15:50:00.557807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.557821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:11.911 [2024-12-06 15:50:00.557833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:11.911 [2024-12-06 15:50:00.557848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:23:11.911 [2024-12-06 15:50:00.557859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:11.911 [2024-12-06 15:50:00.557873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:11.911 [2024-12-06 15:50:00.557884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:11.911 [2024-12-06 15:50:00.557900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:11.911 [2024-12-06 15:50:00.557911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:11.911 [2024-12-06 15:50:00.557925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:11.911 [2024-12-06 15:50:00.557936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.557949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.557961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.557987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.558000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:11.911 [2024-12-06 15:50:00.558013] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:11.911 [2024-12-06 15:50:00.558025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.558044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:11.911 [2024-12-06 15:50:00.558055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:11.911 [2024-12-06 15:50:00.558068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:11.911 [2024-12-06 15:50:00.558079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:11.911 [2024-12-06 15:50:00.558094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.911 [2024-12-06 15:50:00.558106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:11.911 [2024-12-06 15:50:00.558125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:23:11.911 [2024-12-06 15:50:00.558153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.911 [2024-12-06 15:50:00.558286] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:23:11.911 [2024-12-06 15:50:00.558302] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:16.151 [2024-12-06 15:50:04.261272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.261366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:16.151 [2024-12-06 15:50:04.261408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3702.999 ms 00:23:16.151 [2024-12-06 15:50:04.261425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.278848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.278902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.151 [2024-12-06 15:50:04.278942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:23:16.151 [2024-12-06 15:50:04.278985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.279265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.279284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:16.151 [2024-12-06 15:50:04.279301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:16.151 [2024-12-06 15:50:04.279367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.313031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.313247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.151 [2024-12-06 15:50:04.313284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.602 ms 00:23:16.151 [2024-12-06 15:50:04.313298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.313446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.313467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.151 [2024-12-06 15:50:04.313488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:16.151 [2024-12-06 15:50:04.313500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.314171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.314191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.151 [2024-12-06 15:50:04.314207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:23:16.151 [2024-12-06 15:50:04.314219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.314447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.314471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.151 [2024-12-06 15:50:04.314488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:23:16.151 [2024-12-06 15:50:04.314502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.151 [2024-12-06 15:50:04.323891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.151 [2024-12-06 15:50:04.323930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:23:16.152 [2024-12-06 15:50:04.323990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.320 ms 00:23:16.152 [2024-12-06 15:50:04.324002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.332931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:16.152 [2024-12-06 15:50:04.355600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.355913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:16.152 [2024-12-06 15:50:04.355948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.406 ms 00:23:16.152 [2024-12-06 15:50:04.355980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.446664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.446755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:16.152 [2024-12-06 15:50:04.446778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.520 ms 00:23:16.152 [2024-12-06 15:50:04.446795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.447082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.447108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.152 [2024-12-06 15:50:04.447121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:23:16.152 [2024-12-06 15:50:04.447135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.451256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.451301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:16.152 [2024-12-06 15:50:04.451318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:23:16.152 [2024-12-06 15:50:04.451332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.454875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.454919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:16.152 [2024-12-06 15:50:04.454946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:23:16.152 [2024-12-06 15:50:04.454961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.455321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.455341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.152 [2024-12-06 15:50:04.455353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:16.152 [2024-12-06 15:50:04.455368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.499435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.499635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:16.152 [2024-12-06 15:50:04.499665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.023 ms 00:23:16.152 [2024-12-06 15:50:04.499682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.505736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.505786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:16.152 [2024-12-06 15:50:04.505802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.920 ms 00:23:16.152 [2024-12-06 15:50:04.505816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.509930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.509998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:16.152 [2024-12-06 15:50:04.510014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:23:16.152 [2024-12-06 15:50:04.510027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.514253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.514298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.152 [2024-12-06 15:50:04.514314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.167 ms 00:23:16.152 [2024-12-06 15:50:04.514330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.514400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.514421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.152 [2024-12-06 15:50:04.514432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:16.152 [2024-12-06 15:50:04.514445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.514546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.152 [2024-12-06 15:50:04.514583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.152 [2024-12-06 15:50:04.514595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:16.152 [2024-12-06 15:50:04.514612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.152 [2024-12-06 15:50:04.516158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:16.152 [2024-12-06 15:50:04.517464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3981.676 ms, result 0 00:23:16.152 [2024-12-06 15:50:04.518497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:16.152 { 00:23:16.152 "name": "ftl0", 00:23:16.152 "uuid": "847b1325-e058-4168-96b2-9a15170e279e" 00:23:16.152 } 00:23:16.152 15:50:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:16.152 15:50:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:16.152 15:50:04 ftl.ftl_trim 
-- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:16.411 [ 00:23:16.411 { 00:23:16.411 "name": "ftl0", 00:23:16.411 "aliases": [ 00:23:16.411 "847b1325-e058-4168-96b2-9a15170e279e" 00:23:16.411 ], 00:23:16.411 "product_name": "FTL disk", 00:23:16.411 "block_size": 4096, 00:23:16.411 "num_blocks": 23592960, 00:23:16.411 "uuid": "847b1325-e058-4168-96b2-9a15170e279e", 00:23:16.411 "assigned_rate_limits": { 00:23:16.411 "rw_ios_per_sec": 0, 00:23:16.412 "rw_mbytes_per_sec": 0, 00:23:16.412 "r_mbytes_per_sec": 0, 00:23:16.412 "w_mbytes_per_sec": 0 00:23:16.412 }, 00:23:16.412 "claimed": false, 00:23:16.412 "zoned": false, 00:23:16.412 "supported_io_types": { 00:23:16.412 "read": true, 00:23:16.412 "write": true, 00:23:16.412 "unmap": true, 00:23:16.412 "flush": true, 00:23:16.412 "reset": false, 00:23:16.412 "nvme_admin": false, 00:23:16.412 "nvme_io": false, 00:23:16.412 "nvme_io_md": false, 00:23:16.412 "write_zeroes": true, 00:23:16.412 "zcopy": false, 00:23:16.412 "get_zone_info": false, 00:23:16.412 "zone_management": false, 00:23:16.412 "zone_append": false, 00:23:16.412 "compare": false, 00:23:16.412 "compare_and_write": false, 00:23:16.412 "abort": false, 00:23:16.412 "seek_hole": false, 00:23:16.412 "seek_data": false, 00:23:16.412 "copy": false, 00:23:16.412 "nvme_iov_md": false 00:23:16.412 }, 00:23:16.412 "driver_specific": { 00:23:16.412 "ftl": { 00:23:16.412 "base_bdev": "d0ca69f9-9705-4b64-9d84-498cebc87870", 00:23:16.412 "cache": "nvc0n1p0" 00:23:16.412 } 00:23:16.412 } 00:23:16.412 } 00:23:16.412 ] 00:23:16.412 15:50:05 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:16.412 15:50:05 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:16.412 15:50:05 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:16.977 15:50:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:16.977 15:50:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:16.977 15:50:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:16.977 { 00:23:16.977 "name": "ftl0", 00:23:16.977 "aliases": [ 00:23:16.977 "847b1325-e058-4168-96b2-9a15170e279e" 00:23:16.977 ], 00:23:16.977 "product_name": "FTL disk", 00:23:16.977 "block_size": 4096, 00:23:16.977 "num_blocks": 23592960, 00:23:16.977 "uuid": "847b1325-e058-4168-96b2-9a15170e279e", 00:23:16.977 "assigned_rate_limits": { 00:23:16.977 "rw_ios_per_sec": 0, 00:23:16.977 "rw_mbytes_per_sec": 0, 00:23:16.977 "r_mbytes_per_sec": 0, 00:23:16.977 "w_mbytes_per_sec": 0 00:23:16.977 }, 00:23:16.977 "claimed": false, 00:23:16.977 "zoned": false, 00:23:16.977 "supported_io_types": { 00:23:16.977 "read": true, 00:23:16.977 "write": true, 00:23:16.977 "unmap": true, 00:23:16.977 "flush": true, 00:23:16.977 "reset": false, 00:23:16.977 "nvme_admin": false, 00:23:16.978 "nvme_io": false, 00:23:16.978 "nvme_io_md": false, 00:23:16.978 "write_zeroes": true, 00:23:16.978 "zcopy": false, 00:23:16.978 "get_zone_info": false, 00:23:16.978 "zone_management": false, 00:23:16.978 "zone_append": false, 00:23:16.978 "compare": false, 00:23:16.978 "compare_and_write": false, 00:23:16.978 "abort": false, 00:23:16.978 "seek_hole": false, 00:23:16.978 "seek_data": false, 00:23:16.978 "copy": false, 00:23:16.978 "nvme_iov_md": false 00:23:16.978 }, 00:23:16.978 "driver_specific": { 00:23:16.978 "ftl": { 00:23:16.978 "base_bdev": "d0ca69f9-9705-4b64-9d84-498cebc87870", 
00:23:16.978 "cache": "nvc0n1p0" 00:23:16.978 } 00:23:16.978 } 00:23:16.978 } 00:23:16.978 ]' 00:23:16.978 15:50:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:17.236 15:50:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:17.236 15:50:05 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:17.236 [2024-12-06 15:50:05.903778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.903848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:17.236 [2024-12-06 15:50:05.903875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:17.236 [2024-12-06 15:50:05.903887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.904050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:17.236 [2024-12-06 15:50:05.905443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.905483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:17.236 [2024-12-06 15:50:05.905515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:23:17.236 [2024-12-06 15:50:05.905530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.906372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.906415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:17.236 [2024-12-06 15:50:05.906431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:23:17.236 [2024-12-06 15:50:05.906444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.909658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.909696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:17.236 [2024-12-06 15:50:05.909710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:23:17.236 [2024-12-06 15:50:05.909730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.915682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.915720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:17.236 [2024-12-06 15:50:05.915737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.865 ms 00:23:17.236 [2024-12-06 15:50:05.915753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.917696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.917759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:17.236 [2024-12-06 15:50:05.917775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:23:17.236 [2024-12-06 15:50:05.917788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.923210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.923290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:17.236 [2024-12-06 15:50:05.923307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 5.364 ms 00:23:17.236 [2024-12-06 15:50:05.923322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.923570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.923593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:17.236 [2024-12-06 15:50:05.923605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:23:17.236 [2024-12-06 15:50:05.923618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.236 [2024-12-06 15:50:05.926350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.236 [2024-12-06 15:50:05.926420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:17.236 [2024-12-06 15:50:05.926435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:23:17.236 [2024-12-06 15:50:05.926465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.496 [2024-12-06 15:50:05.928547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.496 [2024-12-06 15:50:05.928751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:17.496 [2024-12-06 15:50:05.928792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:23:17.496 [2024-12-06 15:50:05.928822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.496 [2024-12-06 15:50:05.930426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.496 [2024-12-06 15:50:05.930485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:17.496 [2024-12-06 15:50:05.930501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:23:17.496 [2024-12-06 15:50:05.930514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.496 [2024-12-06 15:50:05.931839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.496 [2024-12-06 15:50:05.932043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:17.496 [2024-12-06 15:50:05.932067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:23:17.496 [2024-12-06 15:50:05.932089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.496 [2024-12-06 15:50:05.932203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:17.496 [2024-12-06 15:50:05.932239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:17.496 [2024-12-06 15:50:05.932342] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8 through Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:17.498 [2024-12-06 15:50:05.933658] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:17.498 [2024-12-06 15:50:05.933669] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e 00:23:17.498 [2024-12-06 15:50:05.933681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:17.498 [2024-12-06 15:50:05.933691] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:17.498 [2024-12-06 15:50:05.933707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:17.498 [2024-12-06 15:50:05.933718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:17.498 [2024-12-06 15:50:05.933729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:17.498 [2024-12-06 15:50:05.933739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:17.498
[2024-12-06 15:50:05.933751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:17.498 [2024-12-06 15:50:05.933759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:17.498 [2024-12-06 15:50:05.933770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:17.498 [2024-12-06 15:50:05.933780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-06 15:50:05.933793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:17.498 [2024-12-06 15:50:05.933803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:23:17.498 [2024-12-06 15:50:05.933818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.936273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-06 15:50:05.936309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:17.498 [2024-12-06 15:50:05.936323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:23:17.498 [2024-12-06 15:50:05.936336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.936594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.498 [2024-12-06 15:50:05.936621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:17.498 [2024-12-06 15:50:05.936636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:23:17.498 [2024-12-06 15:50:05.936651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.945015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.945058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:17.498 [2024-12-06 15:50:05.945073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.945091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.945213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.945241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:17.498 [2024-12-06 15:50:05.945254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.945270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.945362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.945388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:17.498 [2024-12-06 15:50:05.945399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.945412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.945470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.945490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:17.498 [2024-12-06 15:50:05.945505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.945522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.961425] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.961494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:17.498 [2024-12-06 15:50:05.961511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.961525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.974753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.974808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:17.498 [2024-12-06 15:50:05.974825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.974841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.974994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.498 [2024-12-06 15:50:05.975036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.975049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.975169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.498 [2024-12-06 15:50:05.975197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.975209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.975388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.498 [2024-12-06 15:50:05.975433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.975451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.975565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:17.498 [2024-12-06 15:50:05.975605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.975636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.975729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.498 [2024-12-06 15:50:05.975787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.975805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 [2024-12-06 15:50:05.975927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:17.498 [2024-12-06 15:50:05.975984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.498 [2024-12-06 15:50:05.975998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:17.498 [2024-12-06 15:50:05.976012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.498 
[2024-12-06 15:50:05.976411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.571 ms, result 0 00:23:17.498 true 00:23:17.498 15:50:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 88909 00:23:17.498 15:50:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88909 ']' 00:23:17.498 15:50:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88909 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88909 00:23:17.498 killing process with pid 88909 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88909' 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88909 00:23:17.498 15:50:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88909 00:23:20.785 15:50:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:21.720 65536+0 records in 00:23:21.720 65536+0 records out 00:23:21.720 268435456 bytes (268 MB, 256 MiB) copied, 1.05396 s, 255 MB/s 00:23:21.720 15:50:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:21.720 [2024-12-06 15:50:10.348492] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
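The two commands above are the data path of this test step: dd fills a 256 MiB file with random data (65536 blocks x 4 KiB = 268435456 bytes; dd reports decimal units, so 268435456 B / 1.05396 s comes out as roughly 255 MB/s), and spdk_dd then replays that file onto the FTL bdev. A minimal standalone sketch of the same pair of commands follows; the of= destination is an assumption inferred from the --if argument of the logged spdk_dd line, since the dd line as logged does not show where its output goes:

# Write 256 MiB of random data: 65536 * 4 KiB = 268435456 bytes (256 MiB).
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
# Copy the pattern onto the FTL bdev 'ftl0' defined in the JSON config;
# --if names a regular input file, --ob names the output bdev.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
  --ob=ftl0 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json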
00:23:21.720 [2024-12-06 15:50:10.348707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89097 ] 00:23:21.979 [2024-12-06 15:50:10.513445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.979 [2024-12-06 15:50:10.566983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.239 [2024-12-06 15:50:10.733571] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.239 [2024-12-06 15:50:10.733674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.239 [2024-12-06 15:50:10.894558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.894609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.239 [2024-12-06 15:50:10.894630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.239 [2024-12-06 15:50:10.894642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.897275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.897321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.239 [2024-12-06 15:50:10.897338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:23:22.239 [2024-12-06 15:50:10.897350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.897469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.239 [2024-12-06 15:50:10.897802] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.239 [2024-12-06 15:50:10.897835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.897848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.239 [2024-12-06 15:50:10.897860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:23:22.239 [2024-12-06 15:50:10.897870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.900700] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:22.239 [2024-12-06 15:50:10.904264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.904304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:22.239 [2024-12-06 15:50:10.904337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:23:22.239 [2024-12-06 15:50:10.904349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.904425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.904443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:22.239 [2024-12-06 15:50:10.904480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:22.239 [2024-12-06 15:50:10.904493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.914763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:22.239 [2024-12-06 15:50:10.915015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.239 [2024-12-06 15:50:10.915047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.212 ms 00:23:22.239 [2024-12-06 15:50:10.915061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.239 [2024-12-06 15:50:10.915249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.239 [2024-12-06 15:50:10.915273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.239 [2024-12-06 15:50:10.915288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:22.240 [2024-12-06 15:50:10.915305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.915393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.240 [2024-12-06 15:50:10.915408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.240 [2024-12-06 15:50:10.915420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:22.240 [2024-12-06 15:50:10.915431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.915479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:22.240 [2024-12-06 15:50:10.918511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.240 [2024-12-06 15:50:10.918546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.240 [2024-12-06 15:50:10.918600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:23:22.240 [2024-12-06 15:50:10.918611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.918661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.240 [2024-12-06 15:50:10.918683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.240 [2024-12-06 15:50:10.918696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:22.240 [2024-12-06 15:50:10.918706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.918750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:22.240 [2024-12-06 15:50:10.918791] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:22.240 [2024-12-06 15:50:10.918851] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:22.240 [2024-12-06 15:50:10.918933] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:22.240 [2024-12-06 15:50:10.919119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.240 [2024-12-06 15:50:10.919153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.240 [2024-12-06 15:50:10.919170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:22.240 [2024-12-06 15:50:10.919187] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919202] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:22.240 [2024-12-06 15:50:10.919228] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.240 [2024-12-06 15:50:10.919240] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.240 [2024-12-06 15:50:10.919260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.240 [2024-12-06 15:50:10.919288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.240 [2024-12-06 15:50:10.919301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.240 [2024-12-06 15:50:10.919315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:23:22.240 [2024-12-06 15:50:10.919342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.919478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.240 [2024-12-06 15:50:10.919508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.240 [2024-12-06 15:50:10.919521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:22.240 [2024-12-06 15:50:10.919547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.240 [2024-12-06 15:50:10.919681] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.240 [2024-12-06 15:50:10.919700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.240 [2024-12-06 15:50:10.919713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.240 [2024-12-06 15:50:10.919748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.240 [2024-12-06 15:50:10.919801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.240 [2024-12-06 15:50:10.919836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.240 [2024-12-06 15:50:10.919846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:22.240 [2024-12-06 15:50:10.919856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.240 [2024-12-06 15:50:10.919866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.240 [2024-12-06 15:50:10.919876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:22.240 [2024-12-06 15:50:10.919887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.240 [2024-12-06 15:50:10.919911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919922] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.240 [2024-12-06 15:50:10.919959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:22.240 [2024-12-06 15:50:10.919970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.240 [2024-12-06 15:50:10.919996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.240 [2024-12-06 15:50:10.920030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.240 [2024-12-06 15:50:10.920055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.240 [2024-12-06 15:50:10.920089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.240 [2024-12-06 15:50:10.920116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.240 [2024-12-06 15:50:10.920128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.240 [2024-12-06 15:50:10.920167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.240 [2024-12-06 15:50:10.920178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.240 [2024-12-06 15:50:10.920202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.240 [2024-12-06 15:50:10.920214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:22.240 [2024-12-06 15:50:10.920225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.240 [2024-12-06 15:50:10.920237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.240 [2024-12-06 15:50:10.920248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:22.240 [2024-12-06 15:50:10.920264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.240 [2024-12-06 15:50:10.920287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:22.240 [2024-12-06 15:50:10.920298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.240 [2024-12-06 15:50:10.920321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.240 [2024-12-06 15:50:10.920344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.240 [2024-12-06 15:50:10.920355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.240 [2024-12-06 15:50:10.920367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.240 [2024-12-06 15:50:10.920379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.240 [2024-12-06 15:50:10.920391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.240 
[2024-12-06 15:50:10.920418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.240 [2024-12-06 15:50:10.920480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.240 [2024-12-06 15:50:10.920495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.240 [2024-12-06 15:50:10.920509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.240 [2024-12-06 15:50:10.920524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.240 [2024-12-06 15:50:10.920541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:22.240 [2024-12-06 15:50:10.920554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:22.240 [2024-12-06 15:50:10.920567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:22.240 [2024-12-06 15:50:10.920579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:22.240 [2024-12-06 15:50:10.920591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:22.240 [2024-12-06 15:50:10.920603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:22.240 [2024-12-06 15:50:10.920615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:22.240 [2024-12-06 15:50:10.920640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:22.240 [2024-12-06 15:50:10.920653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:22.240 [2024-12-06 15:50:10.920665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:22.240 [2024-12-06 15:50:10.920677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:22.241 [2024-12-06 15:50:10.920690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:22.241 [2024-12-06 15:50:10.920701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:22.241 [2024-12-06 15:50:10.920714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:22.241 [2024-12-06 15:50:10.920727] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.241 [2024-12-06 15:50:10.920747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.241 [2024-12-06 15:50:10.920763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.241 [2024-12-06 15:50:10.920777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.241 [2024-12-06 15:50:10.920790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.241 [2024-12-06 15:50:10.920802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.241 [2024-12-06 15:50:10.920815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.241 [2024-12-06 15:50:10.920828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.241 [2024-12-06 15:50:10.920841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:23:22.241 [2024-12-06 15:50:10.920853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.943933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.944226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.501 [2024-12-06 15:50:10.944260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.969 ms 00:23:22.501 [2024-12-06 15:50:10.944274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.944548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.944572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.501 [2024-12-06 15:50:10.944601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:22.501 [2024-12-06 15:50:10.944615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.972063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.972119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.501 [2024-12-06 15:50:10.972155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.401 ms 00:23:22.501 [2024-12-06 15:50:10.972167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.972292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.972315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.501 [2024-12-06 15:50:10.972330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:22.501 [2024-12-06 15:50:10.972342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.973293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.973343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.501 [2024-12-06 15:50:10.973373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:23:22.501 [2024-12-06 15:50:10.973385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.973601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.973620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.501 [2024-12-06 15:50:10.973633] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:22.501 [2024-12-06 15:50:10.973653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.986255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.986421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.501 [2024-12-06 15:50:10.986477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.557 ms 00:23:22.501 [2024-12-06 15:50:10.986495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:10.990501] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:22.501 [2024-12-06 15:50:10.990554] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:22.501 [2024-12-06 15:50:10.990574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:10.990586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:22.501 [2024-12-06 15:50:10.990597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms 00:23:22.501 [2024-12-06 15:50:10.990608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.004305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.004504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:22.501 [2024-12-06 15:50:11.004534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.626 ms 00:23:22.501 [2024-12-06 15:50:11.004548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.006661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.006700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:22.501 [2024-12-06 15:50:11.006717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:23:22.501 [2024-12-06 15:50:11.006727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.008571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.008635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:22.501 [2024-12-06 15:50:11.008652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:23:22.501 [2024-12-06 15:50:11.008663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.009064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.009084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:22.501 [2024-12-06 15:50:11.009097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:23:22.501 [2024-12-06 15:50:11.009119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.035995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.036088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:22.501 [2024-12-06 15:50:11.036110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.840 ms 00:23:22.501 [2024-12-06 15:50:11.036123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.043049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:22.501 [2024-12-06 15:50:11.066195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.066254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:22.501 [2024-12-06 15:50:11.066303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.936 ms 00:23:22.501 [2024-12-06 15:50:11.066315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.066452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.066472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:22.501 [2024-12-06 15:50:11.066491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:22.501 [2024-12-06 15:50:11.066504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.066622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.066645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:22.501 [2024-12-06 15:50:11.066658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:23:22.501 [2024-12-06 15:50:11.066669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.066722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.066737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:22.501 [2024-12-06 15:50:11.066759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.501 [2024-12-06 15:50:11.066770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.066827] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:22.501 [2024-12-06 15:50:11.066845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.066856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:22.501 [2024-12-06 15:50:11.066867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:22.501 [2024-12-06 15:50:11.066878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.501 [2024-12-06 15:50:11.072111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.501 [2024-12-06 15:50:11.072162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:22.501 [2024-12-06 15:50:11.072180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.196 ms 00:23:22.502 [2024-12-06 15:50:11.072191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.502 [2024-12-06 15:50:11.072292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.502 [2024-12-06 15:50:11.072310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:22.502 [2024-12-06 15:50:11.072323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:22.502 [2024-12-06 15:50:11.072334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.502 
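The startup dump above is internally consistent on the L2P sizing: the table logged earlier (23592960 entries, address size 4) needs 23592960 * 4 = 94371840 bytes = 90.00 MiB, exactly the size reported for the l2p region, and the superblock entry with blk_offs:0x20 blk_sz:0x5a00 describes the same region (offset 0.12 MiB, size 90.00 MiB), assuming the 4 KiB FTL block size these figures imply. A quick bash cross-check:

# l2p region size from the SB metadata layout: 0x5a00 blocks * 4096 B per block
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90 (MiB)
# same size derived from the L2P table itself: 23592960 entries * 4 B per entry
echo $(( 23592960 * 4 / 1024 / 1024 ))    # -> 90 (MiB)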
[2024-12-06 15:50:11.074075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.502 [2024-12-06 15:50:11.075324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 179.065 ms, result 0 00:23:22.502 [2024-12-06 15:50:11.076338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:22.502 [2024-12-06 15:50:11.084207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:23.438  [2024-12-06T15:50:13.509Z] Copying: 21/256 [MB] (21 MBps) [2024-12-06T15:50:14.446Z] Copying: 43/256 [MB] (21 MBps) [2024-12-06T15:50:15.384Z] Copying: 64/256 [MB] (21 MBps) [2024-12-06T15:50:16.318Z] Copying: 85/256 [MB] (21 MBps) [2024-12-06T15:50:17.253Z] Copying: 106/256 [MB] (21 MBps) [2024-12-06T15:50:18.191Z] Copying: 128/256 [MB] (21 MBps) [2024-12-06T15:50:19.128Z] Copying: 150/256 [MB] (21 MBps) [2024-12-06T15:50:20.505Z] Copying: 171/256 [MB] (21 MBps) [2024-12-06T15:50:21.441Z] Copying: 192/256 [MB] (21 MBps) [2024-12-06T15:50:22.376Z] Copying: 214/256 [MB] (21 MBps) [2024-12-06T15:50:23.313Z] Copying: 235/256 [MB] (21 MBps) [2024-12-06T15:50:23.313Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-06 15:50:23.026403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:34.620 [2024-12-06 15:50:23.028693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.620 [2024-12-06 15:50:23.028739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:34.620 [2024-12-06 15:50:23.028764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.620 [2024-12-06 15:50:23.028792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.620 [2024-12-06 15:50:23.028850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:34.620 [2024-12-06 15:50:23.030216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.620 [2024-12-06 15:50:23.030416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:34.620 [2024-12-06 15:50:23.030545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:23:34.620 [2024-12-06 15:50:23.030568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.620 [2024-12-06 15:50:23.032647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.620 [2024-12-06 15:50:23.032892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:34.620 [2024-12-06 15:50:23.032928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:23:34.620 [2024-12-06 15:50:23.032942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.039430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.039623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:34.621 [2024-12-06 15:50:23.039651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.444 ms 00:23:34.621 [2024-12-06 15:50:23.039677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.045636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 
15:50:23.045672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:34.621 [2024-12-06 15:50:23.045687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:23:34.621 [2024-12-06 15:50:23.045704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.047109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.047291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:34.621 [2024-12-06 15:50:23.047407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:23:34.621 [2024-12-06 15:50:23.047453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.051443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.051495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:34.621 [2024-12-06 15:50:23.051510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.936 ms 00:23:34.621 [2024-12-06 15:50:23.051521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.051634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.051652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:34.621 [2024-12-06 15:50:23.051664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:34.621 [2024-12-06 15:50:23.051680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.054077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.054112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:34.621 [2024-12-06 15:50:23.054125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:23:34.621 [2024-12-06 15:50:23.054135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.055827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.055863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:34.621 [2024-12-06 15:50:23.055893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:23:34.621 [2024-12-06 15:50:23.055902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.057260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.057297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:34.621 [2024-12-06 15:50:23.057313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:23:34.621 [2024-12-06 15:50:23.057322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.058661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.621 [2024-12-06 15:50:23.058697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:34.621 [2024-12-06 15:50:23.058710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:23:34.621 [2024-12-06 15:50:23.058719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.621 [2024-12-06 15:50:23.058754] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:34.621 [2024-12-06 15:50:23.058776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 74: 0 / 261120 wr_cnt: 0 state: free
00:23:34.622 [2024-12-06 15:50:23.059568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:34.622 [2024-12-06 15:50:23.059843] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:34.622 [2024-12-06 15:50:23.059854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e 00:23:34.622 [2024-12-06 15:50:23.059864] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:34.622 [2024-12-06 15:50:23.059874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:34.622 [2024-12-06 15:50:23.059883] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:34.622 [2024-12-06 15:50:23.059893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:34.622 [2024-12-06 15:50:23.059903] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:34.622 [2024-12-06 15:50:23.059913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:34.622 [2024-12-06 15:50:23.060291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:34.622 [2024-12-06 15:50:23.060343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:34.622 [2024-12-06 15:50:23.060378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:34.622 [2024-12-06 15:50:23.060414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.622 [2024-12-06 15:50:23.060451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:34.622 [2024-12-06 15:50:23.060544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:23:34.622 [2024-12-06 15:50:23.060750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.063595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.622 [2024-12-06 15:50:23.063622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:34.622 [2024-12-06 15:50:23.063639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:23:34.622 [2024-12-06 15:50:23.063650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.063827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.622 [2024-12-06 15:50:23.063841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:34.622 [2024-12-06 15:50:23.063854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:23:34.622 [2024-12-06 15:50:23.063865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.074378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.074556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.622 [2024-12-06 15:50:23.074668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.074724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.074903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.074965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.622 [2024-12-06 15:50:23.074982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.074994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.075055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.075073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.622 [2024-12-06 15:50:23.075085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.075122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.075153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.075165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.622 [2024-12-06 15:50:23.075177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.075187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.092452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.092519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.622 [2024-12-06 15:50:23.092553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.092564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.105803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.106077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.622 [2024-12-06 15:50:23.106194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.106316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.106405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.106426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.622 [2024-12-06 15:50:23.106440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.106453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.106494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.106519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.622 [2024-12-06 15:50:23.106531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.622 [2024-12-06 15:50:23.106543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.622 [2024-12-06 15:50:23.106660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.622 [2024-12-06 15:50:23.106680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.623 [2024-12-06 15:50:23.106693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.623 [2024-12-06 15:50:23.106720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.623 [2024-12-06 15:50:23.106792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.623 [2024-12-06 15:50:23.106820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:34.623 
[2024-12-06 15:50:23.106837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:34.623 [2024-12-06 15:50:23.106856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:34.623 [2024-12-06 15:50:23.106909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:34.623 [2024-12-06 15:50:23.106938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:34.623 [2024-12-06 15:50:23.106972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:34.623 [2024-12-06 15:50:23.107002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:34.623 [2024-12-06 15:50:23.107064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:34.623 [2024-12-06 15:50:23.107087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:34.623 [2024-12-06 15:50:23.107098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:34.623 [2024-12-06 15:50:23.107125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:34.623 [2024-12-06 15:50:23.107313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.587 ms, result 0
00:23:34.881
00:23:34.881
00:23:34.881 15:50:23 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89230
00:23:34.881 15:50:23 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:23:34.881 15:50:23 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89230
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89230 ']'
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:34.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:34.881 15:50:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:35.140 [2024-12-06 15:50:23.607277] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
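The waitforlisten 89230 call traced above is what gates the rest of trim.sh: it blocks until the freshly launched spdk_tgt is both alive and answering on /var/tmp/spdk.sock. A minimal sketch of that wait loop (a hypothetical re-implementation, not the real helper's source; the actual waitforlisten in common/autotest_common.sh wraps the same idea with xtrace plumbing and richer diagnostics), assuming only rpc.py's standard -s socket flag and the always-available rpc_get_methods RPC:

  # Hypothetical stand-in for common/autotest_common.sh's waitforlisten.
  waitforlisten_sketch() {
      local svcpid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          # Bail out early if the target died during startup.
          kill -0 "$svcpid" 2>/dev/null || return 1
          # The socket is ready once any RPC round-trips successfully.
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
              >/dev/null 2>&1 && return 0
          sleep 0.5
      done
      return 1
  }

Once this returns 0, the RPC-driven part of the test (load_config and the unmap calls further down) can proceed against the target whose startup banner follows.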
00:23:35.140 [2024-12-06 15:50:23.607918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89230 ]
00:23:35.140 [2024-12-06 15:50:23.766491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.140 [2024-12-06 15:50:23.805752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:36.075 15:50:24 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:36.075 15:50:24 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:23:36.075 15:50:24 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:23:36.075 [2024-12-06 15:50:24.699023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:36.075 [2024-12-06 15:50:24.699115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:36.335 [2024-12-06 15:50:24.871721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.335 [2024-12-06 15:50:24.871806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:36.335 [2024-12-06 15:50:24.871827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:23:36.335 [2024-12-06 15:50:24.871841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.335 [2024-12-06 15:50:24.874765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.335 [2024-12-06 15:50:24.874827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:36.335 [2024-12-06 15:50:24.874844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms
00:23:36.335 [2024-12-06 15:50:24.874857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.335 [2024-12-06 15:50:24.875003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:36.335 [2024-12-06 15:50:24.875277] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:36.335 [2024-12-06 15:50:24.875303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.335 [2024-12-06 15:50:24.875328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:36.335 [2024-12-06 15:50:24.875342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms
00:23:36.335 [2024-12-06 15:50:24.875354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.335 [2024-12-06 15:50:24.878142] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:36.335 [2024-12-06 15:50:24.881547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.335 [2024-12-06 15:50:24.881587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:23:36.335 [2024-12-06 15:50:24.881623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms
00:23:36.335 [2024-12-06 15:50:24.881635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.335 [2024-12-06 15:50:24.881712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.335 [2024-12-06 15:50:24.881730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:23:36.335 [2024-12-06 15:50:24.881748]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:36.335 [2024-12-06 15:50:24.881759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.891629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.891670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.335 [2024-12-06 15:50:24.891709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.806 ms 00:23:36.335 [2024-12-06 15:50:24.891720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.891895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.891917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.335 [2024-12-06 15:50:24.891983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:36.335 [2024-12-06 15:50:24.892008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.892083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.892099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.335 [2024-12-06 15:50:24.892128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:36.335 [2024-12-06 15:50:24.892141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.892188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.335 [2024-12-06 15:50:24.894794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.895059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.335 [2024-12-06 15:50:24.895094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:23:36.335 [2024-12-06 15:50:24.895130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.895191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.895218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.335 [2024-12-06 15:50:24.895233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.335 [2024-12-06 15:50:24.895251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.895285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.335 [2024-12-06 15:50:24.895346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.335 [2024-12-06 15:50:24.895413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.335 [2024-12-06 15:50:24.895449] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.335 [2024-12-06 15:50:24.895579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.335 [2024-12-06 15:50:24.895602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.335 [2024-12-06 15:50:24.895618] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.335 [2024-12-06 15:50:24.895638] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.335 [2024-12-06 15:50:24.895652] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.335 [2024-12-06 15:50:24.895688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.335 [2024-12-06 15:50:24.895700] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.335 [2024-12-06 15:50:24.895717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.335 [2024-12-06 15:50:24.895733] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.335 [2024-12-06 15:50:24.895750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.895762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.335 [2024-12-06 15:50:24.895779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:23:36.335 [2024-12-06 15:50:24.895791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.895910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.335 [2024-12-06 15:50:24.895928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.335 [2024-12-06 15:50:24.895981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:36.335 [2024-12-06 15:50:24.895996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.335 [2024-12-06 15:50:24.896117] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.336 [2024-12-06 15:50:24.896135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.336 [2024-12-06 15:50:24.896152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.336 [2024-12-06 15:50:24.896248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.336 [2024-12-06 15:50:24.896294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.336 [2024-12-06 15:50:24.896328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.336 [2024-12-06 15:50:24.896342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.336 [2024-12-06 15:50:24.896359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.336 [2024-12-06 15:50:24.896374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.336 [2024-12-06 15:50:24.896391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.336 [2024-12-06 15:50:24.896402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 
[2024-12-06 15:50:24.896418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.336 [2024-12-06 15:50:24.896430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.336 [2024-12-06 15:50:24.896519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.336 [2024-12-06 15:50:24.896574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.336 [2024-12-06 15:50:24.896621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.336 [2024-12-06 15:50:24.896663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.336 [2024-12-06 15:50:24.896712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.336 [2024-12-06 15:50:24.896741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.336 [2024-12-06 15:50:24.896754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.336 [2024-12-06 15:50:24.896777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.336 [2024-12-06 15:50:24.896790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.336 [2024-12-06 15:50:24.896822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.336 [2024-12-06 15:50:24.896834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.336 [2024-12-06 15:50:24.896864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.336 [2024-12-06 15:50:24.896896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.336 [2024-12-06 15:50:24.896925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.336 [2024-12-06 15:50:24.896950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.336 [2024-12-06 15:50:24.896972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.336 [2024-12-06 15:50:24.896986] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:36.336 [2024-12-06 15:50:24.897003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.336 [2024-12-06 15:50:24.897015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.336 [2024-12-06 15:50:24.897033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.336 [2024-12-06 15:50:24.897044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.336 [2024-12-06 15:50:24.897063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.336 [2024-12-06 15:50:24.897076] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.336 [2024-12-06 15:50:24.897093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.336 [2024-12-06 15:50:24.897148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.336 [2024-12-06 15:50:24.897159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.336 [2024-12-06 15:50:24.897173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.336 [2024-12-06 15:50:24.897185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.336 [2024-12-06 15:50:24.897198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.336 [2024-12-06 15:50:24.897210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.336 [2024-12-06 15:50:24.897223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.336 [2024-12-06 15:50:24.897234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.336 [2024-12-06 15:50:24.897248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.336 [2024-12-06 15:50:24.897325] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.336 [2024-12-06 
15:50:24.897351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.336 [2024-12-06 15:50:24.897378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.336 [2024-12-06 15:50:24.897389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.336 [2024-12-06 15:50:24.897403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.336 [2024-12-06 15:50:24.897415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.897430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.336 [2024-12-06 15:50:24.897441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:23:36.336 [2024-12-06 15:50:24.897457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.919392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.919675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.336 [2024-12-06 15:50:24.919802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.847 ms 00:23:36.336 [2024-12-06 15:50:24.919949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.920190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.920266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.336 [2024-12-06 15:50:24.920369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:36.336 [2024-12-06 15:50:24.920521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.937717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.937911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.336 [2024-12-06 15:50:24.938063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.113 ms 00:23:36.336 [2024-12-06 15:50:24.938126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.938356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.938429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.336 [2024-12-06 15:50:24.938588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.336 [2024-12-06 15:50:24.938670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.939654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.939836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.336 [2024-12-06 15:50:24.939958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:23:36.336 [2024-12-06 15:50:24.940085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:36.336 [2024-12-06 15:50:24.940328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.336 [2024-12-06 15:50:24.940415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.337 [2024-12-06 15:50:24.940579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:23:36.337 [2024-12-06 15:50:24.940645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.950925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.951161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.337 [2024-12-06 15:50:24.951276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.209 ms 00:23:36.337 [2024-12-06 15:50:24.951341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.962162] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:36.337 [2024-12-06 15:50:24.962370] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.337 [2024-12-06 15:50:24.962486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.962511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.337 [2024-12-06 15:50:24.962525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.862 ms 00:23:36.337 [2024-12-06 15:50:24.962539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.980322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.980380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.337 [2024-12-06 15:50:24.980397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:23:36.337 [2024-12-06 15:50:24.980413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.982526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.982587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.337 [2024-12-06 15:50:24.982603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:23:36.337 [2024-12-06 15:50:24.982616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.984433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.984525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.337 [2024-12-06 15:50:24.984543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:23:36.337 [2024-12-06 15:50:24.984556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:24.985008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:24.985046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.337 [2024-12-06 15:50:24.985061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:23:36.337 [2024-12-06 15:50:24.985075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 
15:50:25.009085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.337 [2024-12-06 15:50:25.009181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.337 [2024-12-06 15:50:25.009202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.976 ms 00:23:36.337 [2024-12-06 15:50:25.009233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.337 [2024-12-06 15:50:25.016104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:36.595 [2024-12-06 15:50:25.038267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.038330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.595 [2024-12-06 15:50:25.038355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.901 ms 00:23:36.595 [2024-12-06 15:50:25.038366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.038506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.038530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.595 [2024-12-06 15:50:25.038546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.595 [2024-12-06 15:50:25.038557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.038649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.038665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.595 [2024-12-06 15:50:25.038682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:36.595 [2024-12-06 15:50:25.038693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.038730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.038744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.595 [2024-12-06 15:50:25.038765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:36.595 [2024-12-06 15:50:25.038776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.038840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.595 [2024-12-06 15:50:25.038856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.038870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.595 [2024-12-06 15:50:25.038881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:36.595 [2024-12-06 15:50:25.038893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.043518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.043566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.595 [2024-12-06 15:50:25.043586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:23:36.595 [2024-12-06 15:50:25.043600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.595 [2024-12-06 15:50:25.043687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.595 [2024-12-06 15:50:25.043710] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:36.595 [2024-12-06 15:50:25.043722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:23:36.595 [2024-12-06 15:50:25.043734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.595 [2024-12-06 15:50:25.045422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:36.595 [2024-12-06 15:50:25.046748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 173.203 ms, result 0
00:23:36.595 [2024-12-06 15:50:25.048514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:36.595 Some configs were skipped because the RPC state that can call them passed over.
00:23:36.595 15:50:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:36.853 [2024-12-06 15:50:25.337304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:36.853 [2024-12-06 15:50:25.337358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:36.853 [2024-12-06 15:50:25.337412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms
00:23:36.853 [2024-12-06 15:50:25.337424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:36.853 [2024-12-06 15:50:25.337472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.965 ms, result 0
00:23:36.853 true
00:23:36.853 15:50:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:37.111 [2024-12-06 15:50:25.584958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:37.111 [2024-12-06 15:50:25.585220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:23:37.111 [2024-12-06 15:50:25.585351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms
00:23:37.111 [2024-12-06 15:50:25.585403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:37.111 [2024-12-06 15:50:25.585485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.750 ms, result 0
00:23:37.111 true
00:23:37.111 15:50:25 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89230
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89230 ']'
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89230
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89230
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89230'
killing process with pid 89230
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89230
00:23:37.111 15:50:25 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89230
00:23:37.371 [2024-12-06 15:50:25.885412]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.885501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:37.371 [2024-12-06 15:50:25.885526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:37.371 [2024-12-06 15:50:25.885537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.885576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:37.371 [2024-12-06 15:50:25.886830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.886867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:37.371 [2024-12-06 15:50:25.886882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:23:37.371 [2024-12-06 15:50:25.886894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.887210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.887237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:37.371 [2024-12-06 15:50:25.887250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:37.371 [2024-12-06 15:50:25.887261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.890755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.890815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:37.371 [2024-12-06 15:50:25.890832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:23:37.371 [2024-12-06 15:50:25.890849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.897027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.897080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:37.371 [2024-12-06 15:50:25.897096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.121 ms 00:23:37.371 [2024-12-06 15:50:25.897111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.898696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.371 [2024-12-06 15:50:25.898756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:37.371 [2024-12-06 15:50:25.898787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:23:37.371 [2024-12-06 15:50:25.898798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.371 [2024-12-06 15:50:25.903557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.903763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:37.372 [2024-12-06 15:50:25.903793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:23:37.372 [2024-12-06 15:50:25.903809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.903968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.903993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.372 [2024-12-06 15:50:25.904015] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:37.372 [2024-12-06 15:50:25.904032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.906464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.906506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:37.372 [2024-12-06 15:50:25.906521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:23:37.372 [2024-12-06 15:50:25.906535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.908418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.908646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:37.372 [2024-12-06 15:50:25.908771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:23:37.372 [2024-12-06 15:50:25.908840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.910363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.910437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.372 [2024-12-06 15:50:25.910453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:23:37.372 [2024-12-06 15:50:25.910482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.911935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.372 [2024-12-06 15:50:25.912043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:37.372 [2024-12-06 15:50:25.912060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:23:37.372 [2024-12-06 15:50:25.912075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.372 [2024-12-06 15:50:25.912118] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:37.372 [2024-12-06 15:50:25.912156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:37.372 [2024-12-06 15:50:25.912290] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11-100: 0 / 261120 wr_cnt: 0 state: free
00:23:37.373 [2024-12-06 15:50:25.913704] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:37.373 [2024-12-06 15:50:25.913716] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e
00:23:37.373 [2024-12-06 15:50:25.913742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:37.373 [2024-12-06 15:50:25.913754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:37.373 [2024-12-06 15:50:25.913770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:37.373 [2024-12-06 15:50:25.913782] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:37.373 [2024-12-06 15:50:25.913813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:37.373 [2024-12-06 15:50:25.913825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:37.373 [2024-12-06 15:50:25.913839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:37.373 [2024-12-06 15:50:25.913848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:37.373 [2024-12-06 15:50:25.913860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:37.373 [2024-12-06 15:50:25.913871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:37.373 [2024-12-06 15:50:25.913885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.373 [2024-12-06 15:50:25.913897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:23:37.373 [2024-12-06 15:50:25.913923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.916150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.373 [2024-12-06 15:50:25.916199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.373 [2024-12-06 15:50:25.916214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:23:37.373 [2024-12-06 15:50:25.916228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.916371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.373 [2024-12-06 15:50:25.916389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.373 [2024-12-06 15:50:25.916401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:37.373 [2024-12-06 15:50:25.916418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.926506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.926573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.373 [2024-12-06 15:50:25.926590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.926604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.926710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.926733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.373 [2024-12-06 15:50:25.926747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.926767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.926825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.926847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.373 [2024-12-06 15:50:25.926859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.926871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.926898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.926914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.373 [2024-12-06 15:50:25.926934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.926964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.944107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.944434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.373 [2024-12-06 15:50:25.944486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.944536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 
15:50:25.957412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.957501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.373 [2024-12-06 15:50:25.957521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.957539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.957643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.957667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.373 [2024-12-06 15:50:25.957680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.957694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.957736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.957755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.373 [2024-12-06 15:50:25.957767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.957781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.957895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.957922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.373 [2024-12-06 15:50:25.957935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.957964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.958044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.958069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:37.373 [2024-12-06 15:50:25.958082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.958098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.958152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.958175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:37.373 [2024-12-06 15:50:25.958186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.958198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.958258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.373 [2024-12-06 15:50:25.958278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.373 [2024-12-06 15:50:25.958291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.373 [2024-12-06 15:50:25.958304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.373 [2024-12-06 15:50:25.958514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.065 ms, result 0 00:23:37.632 15:50:26 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:37.632 15:50:26 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:37.890 [2024-12-06 15:50:26.372370] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:37.890 [2024-12-06 15:50:26.372575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89271 ] 00:23:37.890 [2024-12-06 15:50:26.528145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.890 [2024-12-06 15:50:26.575583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.149 [2024-12-06 15:50:26.725892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.149 [2024-12-06 15:50:26.726011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.409 [2024-12-06 15:50:26.885361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.409 [2024-12-06 15:50:26.885416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.409 [2024-12-06 15:50:26.885436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:38.409 [2024-12-06 15:50:26.885446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.888044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.888244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.410 [2024-12-06 15:50:26.888271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:23:38.410 [2024-12-06 15:50:26.888295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.888439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.410 [2024-12-06 15:50:26.888784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.410 [2024-12-06 15:50:26.888811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.888823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.410 [2024-12-06 15:50:26.888835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:23:38.410 [2024-12-06 15:50:26.888846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.890886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.410 [2024-12-06 15:50:26.894320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.894359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.410 [2024-12-06 15:50:26.894380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:23:38.410 [2024-12-06 15:50:26.894391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.894465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.894482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:38.410 [2024-12-06 15:50:26.894504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:23:38.410 [2024-12-06 15:50:26.894514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.905570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.905612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.410 [2024-12-06 15:50:26.905628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.003 ms 00:23:38.410 [2024-12-06 15:50:26.905638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.905792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.905813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.410 [2024-12-06 15:50:26.905834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:38.410 [2024-12-06 15:50:26.905844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.905886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.905901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.410 [2024-12-06 15:50:26.905912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.410 [2024-12-06 15:50:26.905922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.905993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:38.410 [2024-12-06 15:50:26.908475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.908510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.410 [2024-12-06 15:50:26.908542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:23:38.410 [2024-12-06 15:50:26.908558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.908608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.908627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.410 [2024-12-06 15:50:26.908640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:38.410 [2024-12-06 15:50:26.908650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.908685] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.410 [2024-12-06 15:50:26.908717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.410 [2024-12-06 15:50:26.908763] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.410 [2024-12-06 15:50:26.908804] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:38.410 [2024-12-06 15:50:26.908902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.410 [2024-12-06 15:50:26.908916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.410 [2024-12-06 15:50:26.908929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.410 [2024-12-06 15:50:26.908941] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.410 [2024-12-06 15:50:26.908953] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.410 [2024-12-06 15:50:26.908964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:38.410 [2024-12-06 15:50:26.908994] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.410 [2024-12-06 15:50:26.909004] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.410 [2024-12-06 15:50:26.909013] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.410 [2024-12-06 15:50:26.909033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.909044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.410 [2024-12-06 15:50:26.909055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:23:38.410 [2024-12-06 15:50:26.909064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.909166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.410 [2024-12-06 15:50:26.909192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.410 [2024-12-06 15:50:26.909211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:38.410 [2024-12-06 15:50:26.909221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.410 [2024-12-06 15:50:26.909338] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.410 [2024-12-06 15:50:26.909359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.410 [2024-12-06 15:50:26.909371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.410 [2024-12-06 15:50:26.909418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.410 [2024-12-06 15:50:26.909454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.410 [2024-12-06 15:50:26.909478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.410 [2024-12-06 15:50:26.909487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:38.410 [2024-12-06 15:50:26.909496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.410 [2024-12-06 15:50:26.909506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.410 [2024-12-06 15:50:26.909516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:38.410 [2024-12-06 15:50:26.909528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909539] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.410 [2024-12-06 15:50:26.909549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.410 [2024-12-06 15:50:26.909579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.410 [2024-12-06 15:50:26.909615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.410 [2024-12-06 15:50:26.909644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.410 [2024-12-06 15:50:26.909673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.410 [2024-12-06 15:50:26.909692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.410 [2024-12-06 15:50:26.909702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.410 [2024-12-06 15:50:26.909737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.410 [2024-12-06 15:50:26.909747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:38.410 [2024-12-06 15:50:26.909756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.410 [2024-12-06 15:50:26.909766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.410 [2024-12-06 15:50:26.909776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:38.410 [2024-12-06 15:50:26.909789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.410 [2024-12-06 15:50:26.909799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.410 [2024-12-06 15:50:26.909810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:38.410 [2024-12-06 15:50:26.909819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.411 [2024-12-06 15:50:26.909829] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.411 [2024-12-06 15:50:26.909840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.411 [2024-12-06 15:50:26.909851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.411 [2024-12-06 15:50:26.909862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.411 [2024-12-06 15:50:26.909876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.411 
[2024-12-06 15:50:26.909887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.411 [2024-12-06 15:50:26.909897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.411 [2024-12-06 15:50:26.909907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.411 [2024-12-06 15:50:26.909917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.411 [2024-12-06 15:50:26.909927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.411 [2024-12-06 15:50:26.909938] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.411 [2024-12-06 15:50:26.909968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.909990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:38.411 [2024-12-06 15:50:26.910002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:38.411 [2024-12-06 15:50:26.910030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:38.411 [2024-12-06 15:50:26.910042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:38.411 [2024-12-06 15:50:26.910068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:38.411 [2024-12-06 15:50:26.910079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:38.411 [2024-12-06 15:50:26.910090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:38.411 [2024-12-06 15:50:26.910115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:38.411 [2024-12-06 15:50:26.910126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:38.411 [2024-12-06 15:50:26.910137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:38.411 [2024-12-06 15:50:26.910191] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.411 [2024-12-06 15:50:26.910209] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.411 [2024-12-06 15:50:26.910237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.411 [2024-12-06 15:50:26.910248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.411 [2024-12-06 15:50:26.910259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.411 [2024-12-06 15:50:26.910270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.910282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.411 [2024-12-06 15:50:26.910303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:23:38.411 [2024-12-06 15:50:26.910314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.929039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.929348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.411 [2024-12-06 15:50:26.929379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.601 ms 00:23:38.411 [2024-12-06 15:50:26.929393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.929582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.929610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.411 [2024-12-06 15:50:26.929635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:38.411 [2024-12-06 15:50:26.929647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.954984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.955031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.411 [2024-12-06 15:50:26.955049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.290 ms 00:23:38.411 [2024-12-06 15:50:26.955072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.955191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.955210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.411 [2024-12-06 15:50:26.955222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:38.411 [2024-12-06 15:50:26.955233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.956072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.956106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.411 [2024-12-06 15:50:26.956123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:23:38.411 [2024-12-06 15:50:26.956134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 
15:50:26.956353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.956375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.411 [2024-12-06 15:50:26.956401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:23:38.411 [2024-12-06 15:50:26.956427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.967299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.967338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.411 [2024-12-06 15:50:26.967355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.835 ms 00:23:38.411 [2024-12-06 15:50:26.967371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.970804] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:38.411 [2024-12-06 15:50:26.970844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:38.411 [2024-12-06 15:50:26.970862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.970874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:38.411 [2024-12-06 15:50:26.970885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:23:38.411 [2024-12-06 15:50:26.970894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.984214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.984254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:38.411 [2024-12-06 15:50:26.984286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.251 ms 00:23:38.411 [2024-12-06 15:50:26.984297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.986388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.986427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:38.411 [2024-12-06 15:50:26.986441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:23:38.411 [2024-12-06 15:50:26.986450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.988207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.988251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:38.411 [2024-12-06 15:50:26.988267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:23:38.411 [2024-12-06 15:50:26.988276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:26.988655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:26.988685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.411 [2024-12-06 15:50:26.988706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:38.411 [2024-12-06 15:50:26.988717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:27.013627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:27.013717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:38.411 [2024-12-06 15:50:27.013739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.875 ms 00:23:38.411 [2024-12-06 15:50:27.013750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:27.020634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:38.411 [2024-12-06 15:50:27.040750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:27.041055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.411 [2024-12-06 15:50:27.041090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.875 ms 00:23:38.411 [2024-12-06 15:50:27.041104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:27.041250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:27.041272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:38.411 [2024-12-06 15:50:27.041291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.411 [2024-12-06 15:50:27.041303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.411 [2024-12-06 15:50:27.041408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.411 [2024-12-06 15:50:27.041425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.412 [2024-12-06 15:50:27.041437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:38.412 [2024-12-06 15:50:27.041447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-06 15:50:27.041483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.412 [2024-12-06 15:50:27.041498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.412 [2024-12-06 15:50:27.041510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:38.412 [2024-12-06 15:50:27.041526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-06 15:50:27.041571] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:38.412 [2024-12-06 15:50:27.041587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.412 [2024-12-06 15:50:27.041598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:38.412 [2024-12-06 15:50:27.041609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:38.412 [2024-12-06 15:50:27.041620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-06 15:50:27.046069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.412 [2024-12-06 15:50:27.046109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.412 [2024-12-06 15:50:27.046142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.417 ms 00:23:38.412 [2024-12-06 15:50:27.046168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-06 15:50:27.046266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.412 [2024-12-06 15:50:27.046285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:38.412 [2024-12-06 15:50:27.046298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:38.412 [2024-12-06 15:50:27.046309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.412 [2024-12-06 15:50:27.047862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.412 [2024-12-06 15:50:27.049150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 162.121 ms, result 0 00:23:38.412 [2024-12-06 15:50:27.050033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.412 [2024-12-06 15:50:27.057882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:39.790  [2024-12-06T15:50:29.420Z] Copying: 24/256 [MB] (24 MBps) [2024-12-06T15:50:30.358Z] Copying: 46/256 [MB] (21 MBps) [2024-12-06T15:50:31.294Z] Copying: 68/256 [MB] (21 MBps) [2024-12-06T15:50:32.231Z] Copying: 89/256 [MB] (21 MBps) [2024-12-06T15:50:33.169Z] Copying: 111/256 [MB] (21 MBps) [2024-12-06T15:50:34.119Z] Copying: 133/256 [MB] (21 MBps) [2024-12-06T15:50:35.068Z] Copying: 154/256 [MB] (21 MBps) [2024-12-06T15:50:36.447Z] Copying: 176/256 [MB] (21 MBps) [2024-12-06T15:50:37.384Z] Copying: 198/256 [MB] (21 MBps) [2024-12-06T15:50:38.321Z] Copying: 220/256 [MB] (22 MBps) [2024-12-06T15:50:38.890Z] Copying: 242/256 [MB] (22 MBps) [2024-12-06T15:50:38.890Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-06 15:50:38.688197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:50.197 [2024-12-06 15:50:38.690068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.690111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:50.197 [2024-12-06 15:50:38.690142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:50.197 [2024-12-06 15:50:38.690153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.690189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:50.197 [2024-12-06 15:50:38.691397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.691633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:50.197 [2024-12-06 15:50:38.691661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:23:50.197 [2024-12-06 15:50:38.691672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.691958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.691994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:50.197 [2024-12-06 15:50:38.692013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:23:50.197 [2024-12-06 15:50:38.692033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.695108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.695138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:50.197 [2024-12-06 15:50:38.695151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.053 ms 00:23:50.197 [2024-12-06 15:50:38.695161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.701149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.701179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:50.197 [2024-12-06 15:50:38.701193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.967 ms 00:23:50.197 [2024-12-06 15:50:38.701218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.702728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.702783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.197 [2024-12-06 15:50:38.702815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:23:50.197 [2024-12-06 15:50:38.702826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.707583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.707621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.197 [2024-12-06 15:50:38.707647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:23:50.197 [2024-12-06 15:50:38.707656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.707777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.707794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.197 [2024-12-06 15:50:38.707806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:50.197 [2024-12-06 15:50:38.707822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.710217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.710263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:50.197 [2024-12-06 15:50:38.710278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:23:50.197 [2024-12-06 15:50:38.710288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.712037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.712069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:50.197 [2024-12-06 15:50:38.712083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:23:50.197 [2024-12-06 15:50:38.712092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.713344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.713380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.197 [2024-12-06 15:50:38.713410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:23:50.197 [2024-12-06 15:50:38.713420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.197 [2024-12-06 15:50:38.714626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.197 [2024-12-06 15:50:38.714679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.197 [2024-12-06 
15:50:38.714693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms
00:23:50.197 [2024-12-06 15:50:38.714703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:50.197 [2024-12-06 15:50:38.714736] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:50.197 [2024-12-06 15:50:38.714758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-72: 0 / 261120 wr_cnt: 0 state: free
00:23:50.198 [2024-12-06 15:50:38.715597] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.198 [2024-12-06 15:50:38.715857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.199 [2024-12-06 
15:50:38.715867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.199 [2024-12-06 15:50:38.715878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.199 [2024-12-06 15:50:38.715888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.199 [2024-12-06 15:50:38.715908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.199 [2024-12-06 15:50:38.715919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e 00:23:50.199 [2024-12-06 15:50:38.715930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:50.199 [2024-12-06 15:50:38.715951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:50.199 [2024-12-06 15:50:38.715962] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:50.199 [2024-12-06 15:50:38.715973] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:50.199 [2024-12-06 15:50:38.715983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.199 [2024-12-06 15:50:38.716008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.199 [2024-12-06 15:50:38.716026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.199 [2024-12-06 15:50:38.716036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.199 [2024-12-06 15:50:38.716045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.199 [2024-12-06 15:50:38.716056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.199 [2024-12-06 15:50:38.716066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.199 [2024-12-06 15:50:38.716078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:23:50.199 [2024-12-06 15:50:38.716088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.718743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.199 [2024-12-06 15:50:38.718903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:50.199 [2024-12-06 15:50:38.718928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:23:50.199 [2024-12-06 15:50:38.718958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.719098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.199 [2024-12-06 15:50:38.719114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:50.199 [2024-12-06 15:50:38.719127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:50.199 [2024-12-06 15:50:38.719138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.728391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.728433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.199 [2024-12-06 15:50:38.728448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.728465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.728583] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.728600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.199 [2024-12-06 15:50:38.728612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.728624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.728686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.728705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:50.199 [2024-12-06 15:50:38.728717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.728728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.728760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.728774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.199 [2024-12-06 15:50:38.728785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.728796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.744439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.744703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.199 [2024-12-06 15:50:38.744821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.744878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.756435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.756667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.199 [2024-12-06 15:50:38.756696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.756722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.756795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.756812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.199 [2024-12-06 15:50:38.756825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.756837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.756876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.756897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.199 [2024-12-06 15:50:38.756909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.756930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.757062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.757082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.199 [2024-12-06 15:50:38.757110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.757121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:50.199 [2024-12-06 15:50:38.757192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.757226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:50.199 [2024-12-06 15:50:38.757244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.757255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.757318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.757336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.199 [2024-12-06 15:50:38.757347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.757358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.757413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.199 [2024-12-06 15:50:38.757435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.199 [2024-12-06 15:50:38.757446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.199 [2024-12-06 15:50:38.757456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.199 [2024-12-06 15:50:38.757637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.549 ms, result 0 00:23:50.457 00:23:50.457 00:23:50.457 15:50:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:50.457 15:50:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:51.022 15:50:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:51.022 [2024-12-06 15:50:39.626543] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
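[editor's sketch] The two shell steps just above verify the trim result: `cmp --bytes=4194304 .../test/ftl/data /dev/zero` confirms the first 4 MiB of the dumped file read back as zeroes, and `md5sum` fingerprints the file before the next `spdk_dd` write pass. The following is a minimal stand-alone Python equivalent, not part of the test suite; the data path is copied from the log and the helper names are illustrative only.

# Sketch of the verification step shown above: check that the first
# 4194304 bytes of the dump are zero (like `cmp --bytes=4194304 data
# /dev/zero`) and compute its md5 (like `md5sum data`).
import hashlib

DATA_PATH = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path taken from the log
CHECK_BYTES = 4 * 1024 * 1024  # 4194304, matching --bytes=4194304

def first_bytes_are_zero(path: str, n: int) -> bool:
    """True if the first n bytes of the file are all 0x00."""
    with open(path, "rb") as f:
        remaining = n
        while remaining > 0:
            chunk = f.read(min(remaining, 1 << 20))
            if not chunk:                      # file shorter than n bytes
                return False
            if chunk.count(0) != len(chunk):   # any non-zero byte fails
                return False
            remaining -= len(chunk)
    return True

def md5_of(path: str) -> str:
    """Digest-only equivalent of `md5sum path`."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print("trimmed region zeroed:", first_bytes_are_zero(DATA_PATH, CHECK_BYTES))
    print("md5:", md5_of(DATA_PATH))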
00:23:51.023 [2024-12-06 15:50:39.626722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89408 ] 00:23:51.280 [2024-12-06 15:50:39.778464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.280 [2024-12-06 15:50:39.819316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.280 [2024-12-06 15:50:39.967774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:51.280 [2024-12-06 15:50:39.967902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:51.539 [2024-12-06 15:50:40.126799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.126861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:51.539 [2024-12-06 15:50:40.126882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:51.539 [2024-12-06 15:50:40.126893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.129534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.129580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.539 [2024-12-06 15:50:40.129595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:23:51.539 [2024-12-06 15:50:40.129616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.129732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:51.539 [2024-12-06 15:50:40.130018] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:51.539 [2024-12-06 15:50:40.130045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.130056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.539 [2024-12-06 15:50:40.130067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:23:51.539 [2024-12-06 15:50:40.130077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.132224] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:51.539 [2024-12-06 15:50:40.135398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.135437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:51.539 [2024-12-06 15:50:40.135459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.175 ms 00:23:51.539 [2024-12-06 15:50:40.135479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.135555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.135573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:51.539 [2024-12-06 15:50:40.135585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:51.539 [2024-12-06 15:50:40.135594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.146666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
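[editor's sketch] Every FTL management step in this log is emitted as the same four NOTICE lines from mngt/ftl_mngt.c: Action (427), name (428), duration (430), status (431). That regularity makes long sequences like the startup below easy to summarize mechanically. The helper here is hypothetical (not from the SPDK tree) and assumes the log has been split back to one trace entry per line.

# Hypothetical log-reduction helper: pairs each 'name:' trace_step line
# with the 'duration:' line that follows it, yielding one (step, ms)
# record per management step.
import re
from typing import Iterable, List, Tuple

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines: Iterable[str]) -> List[Tuple[str, float]]:
    steps: List[Tuple[str, float]] = []
    pending_name = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending_name = m.group(1)   # remember the step name (line 428)
            continue
        m = DUR_RE.search(line)
        if m and pending_name is not None:
            steps.append((pending_name, float(m.group(1))))  # line 430
            pending_name = None
    return steps

if __name__ == "__main__":
    with open("ftl_startup.log") as f:  # assumed filename, one entry per line
        for name, ms in sorted(step_durations(f), key=lambda s: -s[1]):
            print(f"{ms:10.3f} ms  {name}")

Run against the startup that follows, the slowest steps would surface first (Initialize L2P at 27.461 ms, Restore P2L checkpoints at 25.190 ms, Initialize NV cache at 24.409 ms), matching the 'FTL startup' total of 166.603 ms reported at the end of the sequence.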
00:23:51.539 [2024-12-06 15:50:40.146708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.539 [2024-12-06 15:50:40.146736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.020 ms 00:23:51.539 [2024-12-06 15:50:40.146746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.146916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.146954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.539 [2024-12-06 15:50:40.146969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:51.539 [2024-12-06 15:50:40.146980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.147024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.147039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:51.539 [2024-12-06 15:50:40.147050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:51.539 [2024-12-06 15:50:40.147075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.147124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:51.539 [2024-12-06 15:50:40.149456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.149499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.539 [2024-12-06 15:50:40.149514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:23:51.539 [2024-12-06 15:50:40.149529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.149575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.149594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:51.539 [2024-12-06 15:50:40.149606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:51.539 [2024-12-06 15:50:40.149615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.149640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:51.539 [2024-12-06 15:50:40.149668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:51.539 [2024-12-06 15:50:40.149712] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:51.539 [2024-12-06 15:50:40.149748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:51.539 [2024-12-06 15:50:40.149842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:51.539 [2024-12-06 15:50:40.149855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:51.539 [2024-12-06 15:50:40.149867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:51.539 [2024-12-06 15:50:40.149890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:51.539 [2024-12-06 15:50:40.149901] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:51.539 [2024-12-06 15:50:40.149920] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:51.539 [2024-12-06 15:50:40.149930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:51.539 [2024-12-06 15:50:40.149958] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:51.539 [2024-12-06 15:50:40.149970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:51.539 [2024-12-06 15:50:40.149989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.150010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:51.539 [2024-12-06 15:50:40.150021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:23:51.539 [2024-12-06 15:50:40.150030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.150123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.539 [2024-12-06 15:50:40.150138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:51.539 [2024-12-06 15:50:40.150149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:51.539 [2024-12-06 15:50:40.150158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.539 [2024-12-06 15:50:40.150260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:51.539 [2024-12-06 15:50:40.150287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:51.539 [2024-12-06 15:50:40.150298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.539 [2024-12-06 15:50:40.150308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.539 [2024-12-06 15:50:40.150327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:51.539 [2024-12-06 15:50:40.150336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:51.539 [2024-12-06 15:50:40.150345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:51.539 [2024-12-06 15:50:40.150358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:51.539 [2024-12-06 15:50:40.150368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:51.539 [2024-12-06 15:50:40.150377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.539 [2024-12-06 15:50:40.150386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:51.539 [2024-12-06 15:50:40.150394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:51.539 [2024-12-06 15:50:40.150402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:51.539 [2024-12-06 15:50:40.150411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:51.539 [2024-12-06 15:50:40.150422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:51.540 [2024-12-06 15:50:40.150432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:51.540 [2024-12-06 15:50:40.150450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150459] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:51.540 [2024-12-06 15:50:40.150477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:51.540 [2024-12-06 15:50:40.150510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:51.540 [2024-12-06 15:50:40.150538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:51.540 [2024-12-06 15:50:40.150564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:51.540 [2024-12-06 15:50:40.150591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.540 [2024-12-06 15:50:40.150609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:51.540 [2024-12-06 15:50:40.150617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:51.540 [2024-12-06 15:50:40.150626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:51.540 [2024-12-06 15:50:40.150635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:51.540 [2024-12-06 15:50:40.150643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:51.540 [2024-12-06 15:50:40.150655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:51.540 [2024-12-06 15:50:40.150674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:51.540 [2024-12-06 15:50:40.150684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150693] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:51.540 [2024-12-06 15:50:40.150703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:51.540 [2024-12-06 15:50:40.150713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:51.540 [2024-12-06 15:50:40.150744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:51.540 [2024-12-06 15:50:40.150754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:51.540 [2024-12-06 15:50:40.150764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:51.540 
[2024-12-06 15:50:40.150773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:51.540 [2024-12-06 15:50:40.150782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:51.540 [2024-12-06 15:50:40.150791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:51.540 [2024-12-06 15:50:40.150802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:51.540 [2024-12-06 15:50:40.150813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.150828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:51.540 [2024-12-06 15:50:40.150839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:51.540 [2024-12-06 15:50:40.150849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:51.540 [2024-12-06 15:50:40.150858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:51.540 [2024-12-06 15:50:40.150867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:51.540 [2024-12-06 15:50:40.150876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:51.540 [2024-12-06 15:50:40.150886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:51.540 [2024-12-06 15:50:40.150907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:51.540 [2024-12-06 15:50:40.150917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:51.540 [2024-12-06 15:50:40.150926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.150949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.150962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.150972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.150982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:51.540 [2024-12-06 15:50:40.150992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:51.540 [2024-12-06 15:50:40.151007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.151021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:51.540 [2024-12-06 15:50:40.151032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:51.540 [2024-12-06 15:50:40.151042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:51.540 [2024-12-06 15:50:40.151051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:51.540 [2024-12-06 15:50:40.151061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.151071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:51.540 [2024-12-06 15:50:40.151082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:23:51.540 [2024-12-06 15:50:40.151093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.169924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.170224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.540 [2024-12-06 15:50:40.170353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.728 ms 00:23:51.540 [2024-12-06 15:50:40.170403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.170698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.170883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:51.540 [2024-12-06 15:50:40.171041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:51.540 [2024-12-06 15:50:40.171091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.195658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.195870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.540 [2024-12-06 15:50:40.196003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.409 ms 00:23:51.540 [2024-12-06 15:50:40.196053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.196288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.196355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.540 [2024-12-06 15:50:40.196530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:51.540 [2024-12-06 15:50:40.196581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.197260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.197417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.540 [2024-12-06 15:50:40.197539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:23:51.540 [2024-12-06 15:50:40.197648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.197868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.197933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.540 [2024-12-06 15:50:40.198057] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:23:51.540 [2024-12-06 15:50:40.198181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.209275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.209445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.540 [2024-12-06 15:50:40.209577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.019 ms 00:23:51.540 [2024-12-06 15:50:40.209634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.213520] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:51.540 [2024-12-06 15:50:40.213716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:51.540 [2024-12-06 15:50:40.213859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.213901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:51.540 [2024-12-06 15:50:40.213948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.903 ms 00:23:51.540 [2024-12-06 15:50:40.214067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.540 [2024-12-06 15:50:40.227703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.540 [2024-12-06 15:50:40.227924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:51.540 [2024-12-06 15:50:40.228086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.522 ms 00:23:51.541 [2024-12-06 15:50:40.228220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.230572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.230806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:51.800 [2024-12-06 15:50:40.230919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:23:51.800 [2024-12-06 15:50:40.231051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.232763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.232991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:51.800 [2024-12-06 15:50:40.233016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:23:51.800 [2024-12-06 15:50:40.233027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.233416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.233461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:51.800 [2024-12-06 15:50:40.233504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:51.800 [2024-12-06 15:50:40.233524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.258770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.258860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:51.800 [2024-12-06 15:50:40.258898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.190 ms 00:23:51.800 [2024-12-06 15:50:40.258910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.266104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:51.800 [2024-12-06 15:50:40.286519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.286598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:51.800 [2024-12-06 15:50:40.286618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.461 ms 00:23:51.800 [2024-12-06 15:50:40.286629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.286765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.286785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:51.800 [2024-12-06 15:50:40.286803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:51.800 [2024-12-06 15:50:40.286813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.286897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.286925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:51.800 [2024-12-06 15:50:40.286959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:51.800 [2024-12-06 15:50:40.286989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.287028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.287055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:51.800 [2024-12-06 15:50:40.287084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:51.800 [2024-12-06 15:50:40.287099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.287148] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:51.800 [2024-12-06 15:50:40.287175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.287187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:51.800 [2024-12-06 15:50:40.287199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:51.800 [2024-12-06 15:50:40.287209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.291734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.291914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:51.800 [2024-12-06 15:50:40.291971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:23:51.800 [2024-12-06 15:50:40.291997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 [2024-12-06 15:50:40.292119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.800 [2024-12-06 15:50:40.292150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:51.800 [2024-12-06 15:50:40.292163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:51.800 [2024-12-06 15:50:40.292174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.800 
[2024-12-06 15:50:40.293750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:51.800 [2024-12-06 15:50:40.294996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.603 ms, result 0 00:23:51.800 [2024-12-06 15:50:40.296019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.800 [2024-12-06 15:50:40.303834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:52.060  [2024-12-06T15:50:40.753Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-06 15:50:40.493778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:52.060 [2024-12-06 15:50:40.494674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.494732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:52.060 [2024-12-06 15:50:40.494777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:52.060 [2024-12-06 15:50:40.494788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.494833] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:52.060 [2024-12-06 15:50:40.496097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.496251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:52.060 [2024-12-06 15:50:40.496373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:23:52.060 [2024-12-06 15:50:40.496394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.498617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.498780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:52.060 [2024-12-06 15:50:40.498813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:23:52.060 [2024-12-06 15:50:40.498824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.502459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.502496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:52.060 [2024-12-06 15:50:40.502527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.609 ms 00:23:52.060 [2024-12-06 15:50:40.502537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.508420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.508624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:52.060 [2024-12-06 15:50:40.508649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.846 ms 00:23:52.060 [2024-12-06 15:50:40.508678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.510148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.510182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:52.060 [2024-12-06 15:50:40.510212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.400 ms 00:23:52.060 [2024-12-06 15:50:40.510220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.514162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.514200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:52.060 [2024-12-06 15:50:40.514213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:23:52.060 [2024-12-06 15:50:40.514224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.514336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.514353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:52.060 [2024-12-06 15:50:40.514370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:52.060 [2024-12-06 15:50:40.514380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.516608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.516772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:52.060 [2024-12-06 15:50:40.516879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:23:52.060 [2024-12-06 15:50:40.516924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.518615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.518810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:52.060 [2024-12-06 15:50:40.518916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:23:52.060 [2024-12-06 15:50:40.519044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.520338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.520563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:52.060 [2024-12-06 15:50:40.520588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:23:52.060 [2024-12-06 15:50:40.520599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.521819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.060 [2024-12-06 15:50:40.521881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:52.060 [2024-12-06 15:50:40.521909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:23:52.060 [2024-12-06 15:50:40.521917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.060 [2024-12-06 15:50:40.521967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:52.060 [2024-12-06 15:50:40.521990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:52.060 [2024-12-06 15:50:40.522003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:52.060 [2024-12-06 15:50:40.522012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:52.060 [2024-12-06 15:50:40.522022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:52.060 [2024-12-06 
15:50:40.522032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:23:52.061 [2024-12-06 15:50:40.522311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:52.061 [2024-12-06 15:50:40.522932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.523789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.524004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:52.062 [2024-12-06 15:50:40.524068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:52.062 [2024-12-06 15:50:40.524207] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e 00:23:52.062 [2024-12-06 15:50:40.524268] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:52.062 [2024-12-06 15:50:40.524302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:52.062 
[2024-12-06 15:50:40.524397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:52.062 [2024-12-06 15:50:40.524441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:52.062 [2024-12-06 15:50:40.524503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:52.062 [2024-12-06 15:50:40.524653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:52.062 [2024-12-06 15:50:40.524699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:52.062 [2024-12-06 15:50:40.524733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:52.062 [2024-12-06 15:50:40.524828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:52.062 [2024-12-06 15:50:40.524862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.062 [2024-12-06 15:50:40.524980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:52.062 [2024-12-06 15:50:40.525031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.897 ms 00:23:52.062 [2024-12-06 15:50:40.525149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.527586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.062 [2024-12-06 15:50:40.527726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:52.062 [2024-12-06 15:50:40.527832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:23:52.062 [2024-12-06 15:50:40.527974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.528142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.062 [2024-12-06 15:50:40.528202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:52.062 [2024-12-06 15:50:40.528297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:52.062 [2024-12-06 15:50:40.528411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.538041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.538222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.062 [2024-12-06 15:50:40.538331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.538474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.538592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.538614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.062 [2024-12-06 15:50:40.538627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.538652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.538718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.538737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.062 [2024-12-06 15:50:40.538750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.538760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.538792] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.538806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.062 [2024-12-06 15:50:40.538816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.538826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.554206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.554267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.062 [2024-12-06 15:50:40.554284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.554301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.565694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.565750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.062 [2024-12-06 15:50:40.565768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.565778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.565844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.565860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.062 [2024-12-06 15:50:40.565871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.565881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.565916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.565967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.062 [2024-12-06 15:50:40.565979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.565989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.566108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.566128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.062 [2024-12-06 15:50:40.566155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.566165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.566216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.566239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:52.062 [2024-12-06 15:50:40.566251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.566261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.566313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.566328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.062 [2024-12-06 15:50:40.566339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.566361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:52.062 [2024-12-06 15:50:40.566473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:52.062 [2024-12-06 15:50:40.566496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.062 [2024-12-06 15:50:40.566508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:52.062 [2024-12-06 15:50:40.566519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.062 [2024-12-06 15:50:40.566708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.986 ms, result 0 00:23:52.321 00:23:52.321 00:23:52.321 15:50:40 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89429 00:23:52.321 15:50:40 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:52.321 15:50:40 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89429 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89429 ']' 00:23:52.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.321 15:50:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:52.321 [2024-12-06 15:50:41.002701] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:52.321 [2024-12-06 15:50:41.002909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89429 ] 00:23:52.596 [2024-12-06 15:50:41.158785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.596 [2024-12-06 15:50:41.197390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.527 15:50:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.527 15:50:41 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:53.527 15:50:41 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:53.527 [2024-12-06 15:50:42.179702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.527 [2024-12-06 15:50:42.179780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:53.786 [2024-12-06 15:50:42.354291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.354504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:53.786 [2024-12-06 15:50:42.354536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:53.786 [2024-12-06 15:50:42.354552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.786 [2024-12-06 15:50:42.357307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.357384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.786 [2024-12-06 15:50:42.357408] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:23:53.786 [2024-12-06 15:50:42.357422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.786 [2024-12-06 15:50:42.357578] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:53.786 [2024-12-06 15:50:42.357834] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:53.786 [2024-12-06 15:50:42.357857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.357870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.786 [2024-12-06 15:50:42.357882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:23:53.786 [2024-12-06 15:50:42.357895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.786 [2024-12-06 15:50:42.360349] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:53.786 [2024-12-06 15:50:42.363748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.363923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:53.786 [2024-12-06 15:50:42.364078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:23:53.786 [2024-12-06 15:50:42.364202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.786 [2024-12-06 15:50:42.364318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.364514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:53.786 [2024-12-06 15:50:42.364552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:53.786 [2024-12-06 15:50:42.364576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.786 [2024-12-06 15:50:42.375792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.786 [2024-12-06 15:50:42.375831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.786 [2024-12-06 15:50:42.375850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.145 ms 00:23:53.787 [2024-12-06 15:50:42.375861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.376030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.787 [2024-12-06 15:50:42.376064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.787 [2024-12-06 15:50:42.376094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:53.787 [2024-12-06 15:50:42.376109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.376153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.787 [2024-12-06 15:50:42.376167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:53.787 [2024-12-06 15:50:42.376185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:53.787 [2024-12-06 15:50:42.376196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.376238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:53.787 [2024-12-06 15:50:42.378887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:53.787 [2024-12-06 15:50:42.378945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.787 [2024-12-06 15:50:42.378975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:23:53.787 [2024-12-06 15:50:42.378989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.379035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.787 [2024-12-06 15:50:42.379054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:53.787 [2024-12-06 15:50:42.379066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:53.787 [2024-12-06 15:50:42.379078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.379108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:53.787 [2024-12-06 15:50:42.379138] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:53.787 [2024-12-06 15:50:42.379177] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:53.787 [2024-12-06 15:50:42.379201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:53.787 [2024-12-06 15:50:42.379296] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:53.787 [2024-12-06 15:50:42.379313] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:53.787 [2024-12-06 15:50:42.379327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:53.787 [2024-12-06 15:50:42.379343] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:53.787 [2024-12-06 15:50:42.379356] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:53.787 [2024-12-06 15:50:42.379376] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:53.787 [2024-12-06 15:50:42.379395] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:53.787 [2024-12-06 15:50:42.379409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:53.787 [2024-12-06 15:50:42.379422] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:53.787 [2024-12-06 15:50:42.379435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.787 [2024-12-06 15:50:42.379453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:53.787 [2024-12-06 15:50:42.379467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:23:53.787 [2024-12-06 15:50:42.379483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.379572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.787 [2024-12-06 15:50:42.379585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:53.787 [2024-12-06 15:50:42.379599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:53.787 [2024-12-06 15:50:42.379609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.787 [2024-12-06 15:50:42.379716] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:53.787 [2024-12-06 15:50:42.379739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:53.787 [2024-12-06 15:50:42.379754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.787 [2024-12-06 15:50:42.379765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.379782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:53.787 [2024-12-06 15:50:42.379792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.379804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:53.787 [2024-12-06 15:50:42.379815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:53.787 [2024-12-06 15:50:42.379827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:53.787 [2024-12-06 15:50:42.379837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.787 [2024-12-06 15:50:42.379849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:53.787 [2024-12-06 15:50:42.379859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:53.787 [2024-12-06 15:50:42.379871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:53.787 [2024-12-06 15:50:42.379881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:53.787 [2024-12-06 15:50:42.379893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:53.787 [2024-12-06 15:50:42.379903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.379915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:53.787 [2024-12-06 15:50:42.379925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:53.787 [2024-12-06 15:50:42.380153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.380210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:53.787 [2024-12-06 15:50:42.380255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:53.787 [2024-12-06 15:50:42.380292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.787 [2024-12-06 15:50:42.380449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:53.787 [2024-12-06 15:50:42.380533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:53.787 [2024-12-06 15:50:42.380671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.787 [2024-12-06 15:50:42.380722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:53.787 [2024-12-06 15:50:42.380763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:53.787 [2024-12-06 15:50:42.380867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.787 [2024-12-06 15:50:42.380989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:53.787 [2024-12-06 15:50:42.381039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:53.787 [2024-12-06 15:50:42.381081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:53.787 [2024-12-06 15:50:42.381235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:53.787 [2024-12-06 
15:50:42.381287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:53.787 [2024-12-06 15:50:42.381325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.787 [2024-12-06 15:50:42.381430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:53.787 [2024-12-06 15:50:42.381478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:53.787 [2024-12-06 15:50:42.381520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:53.787 [2024-12-06 15:50:42.381536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:53.787 [2024-12-06 15:50:42.381551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:53.787 [2024-12-06 15:50:42.381561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.381574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:53.787 [2024-12-06 15:50:42.381585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:53.787 [2024-12-06 15:50:42.381597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.381608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:53.787 [2024-12-06 15:50:42.381632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:53.787 [2024-12-06 15:50:42.381645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:53.787 [2024-12-06 15:50:42.381659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:53.787 [2024-12-06 15:50:42.381670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:53.787 [2024-12-06 15:50:42.381683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:53.787 [2024-12-06 15:50:42.381694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:53.787 [2024-12-06 15:50:42.381708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:53.787 [2024-12-06 15:50:42.381719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:53.787 [2024-12-06 15:50:42.381735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:53.787 [2024-12-06 15:50:42.381748] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:53.787 [2024-12-06 15:50:42.381766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.787 [2024-12-06 15:50:42.381779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:53.787 [2024-12-06 15:50:42.381793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:53.787 [2024-12-06 15:50:42.381804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:53.787 [2024-12-06 15:50:42.381816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:53.787 [2024-12-06 15:50:42.381827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:53.787 
[2024-12-06 15:50:42.381840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:53.787 [2024-12-06 15:50:42.381852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:53.788 [2024-12-06 15:50:42.381864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:53.788 [2024-12-06 15:50:42.381875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:53.788 [2024-12-06 15:50:42.381888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.381899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.381938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.381963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.381984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:53.788 [2024-12-06 15:50:42.381996] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:53.788 [2024-12-06 15:50:42.382023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.382036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:53.788 [2024-12-06 15:50:42.382049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:53.788 [2024-12-06 15:50:42.382060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:53.788 [2024-12-06 15:50:42.382073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:53.788 [2024-12-06 15:50:42.382086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.382102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:53.788 [2024-12-06 15:50:42.382114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:23:53.788 [2024-12-06 15:50:42.382127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.404226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.404509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.788 [2024-12-06 15:50:42.404640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.990 ms 00:23:53.788 [2024-12-06 15:50:42.404707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.405020] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.405209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:53.788 [2024-12-06 15:50:42.405318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:53.788 [2024-12-06 15:50:42.405432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.424077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.424276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.788 [2024-12-06 15:50:42.424432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.570 ms 00:23:53.788 [2024-12-06 15:50:42.424518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.424736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.424918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.788 [2024-12-06 15:50:42.425044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:53.788 [2024-12-06 15:50:42.425182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.426057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.426221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.788 [2024-12-06 15:50:42.426342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:23:53.788 [2024-12-06 15:50:42.426451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.426665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.426732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.788 [2024-12-06 15:50:42.426833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:23:53.788 [2024-12-06 15:50:42.426982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.439878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.440075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.788 [2024-12-06 15:50:42.440202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.734 ms 00:23:53.788 [2024-12-06 15:50:42.440257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.454566] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:53.788 [2024-12-06 15:50:42.454808] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:53.788 [2024-12-06 15:50:42.455011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.455217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:53.788 [2024-12-06 15:50:42.455286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:23:53.788 [2024-12-06 15:50:42.455485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.471835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 
15:50:42.472027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:53.788 [2024-12-06 15:50:42.472143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.214 ms 00:23:53.788 [2024-12-06 15:50:42.472198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.474535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.788 [2024-12-06 15:50:42.474709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:53.788 [2024-12-06 15:50:42.474833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:23:53.788 [2024-12-06 15:50:42.474902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.788 [2024-12-06 15:50:42.476931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.477148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:54.047 [2024-12-06 15:50:42.477177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.922 ms 00:23:54.047 [2024-12-06 15:50:42.477207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.477668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.477702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:54.047 [2024-12-06 15:50:42.477719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:54.047 [2024-12-06 15:50:42.477733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.505101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.505477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:54.047 [2024-12-06 15:50:42.505608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.335 ms 00:23:54.047 [2024-12-06 15:50:42.505667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.512831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:54.047 [2024-12-06 15:50:42.535442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.535697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:54.047 [2024-12-06 15:50:42.535833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.403 ms 00:23:54.047 [2024-12-06 15:50:42.535883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.536065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.536123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:54.047 [2024-12-06 15:50:42.536172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:54.047 [2024-12-06 15:50:42.536273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.536421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.536503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:54.047 [2024-12-06 15:50:42.536616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:54.047 [2024-12-06 
15:50:42.536666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.536817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.536872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:54.047 [2024-12-06 15:50:42.536922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:54.047 [2024-12-06 15:50:42.537006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.537112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:54.047 [2024-12-06 15:50:42.537222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.537279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:54.047 [2024-12-06 15:50:42.537318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:54.047 [2024-12-06 15:50:42.537356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.541862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.542052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:54.047 [2024-12-06 15:50:42.542185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:23:54.047 [2024-12-06 15:50:42.542244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.542362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.047 [2024-12-06 15:50:42.542461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:54.047 [2024-12-06 15:50:42.542571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:54.047 [2024-12-06 15:50:42.542623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.047 [2024-12-06 15:50:42.544410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.047 [2024-12-06 15:50:42.545765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.670 ms, result 0 00:23:54.047 [2024-12-06 15:50:42.547023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:54.047 Some configs were skipped because the RPC state that can call them passed over. 
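[Editor's note: the trace above is the second half of the ftl_trim test: spdk_tgt is restarted with -L ftl_init, the ftl0 bdev is re-created from saved JSON via rpc.py load_config (the "FTL startup" management process, 189.670 ms), and the script then trims two LBA ranges over RPC before killing the target. A minimal sketch of that driver sequence follows, assuming the repo paths, the bdev name ftl0, and the LBA arguments visible in the log; the config filename ftl.json and the sleep stand-in are hypothetical, so treat this as an illustrative reconstruction, not the verbatim ftl/trim.sh.]

    #!/usr/bin/env bash
    # Sketch of the RPC sequence traced above (illustrative, not verbatim ftl/trim.sh).
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Start the target with FTL init logging, as in the trace (-L ftl_init).
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # The test uses waitforlisten (autotest_common.sh) to block until
    # /var/tmp/spdk.sock accepts RPCs; a plain sleep stands in here.
    sleep 1

    # Re-create ftl0 from a previously saved JSON config (hypothetical filename);
    # rpc.py load_config reads the JSON from stdin.
    "$RPC" load_config < ftl.json

    # Trim 1024 blocks at each end of the address space. The startup trace
    # reports 23592960 L2P entries, and 23591936 = 23592960 - 1024, so the
    # second call unmaps the last 1024 LBAs of the device.
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    # Stopping the target triggers the 'FTL shutdown' management process
    # seen earlier in the log (Deinit core IO channel, Persist L2P, ...).
    kill "$svcpid" && wait "$svcpid"

[The two unmap calls appear immediately below, each completing as its own "FTL trim" management process (durations 1.975 ms and 1.400 ms), followed by killprocess 89429, which produces the clean-shutdown trace that closes this test.]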
00:23:54.047 15:50:42 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:54.305 [2024-12-06 15:50:42.828208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.305 [2024-12-06 15:50:42.828489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:54.305 [2024-12-06 15:50:42.828528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:23:54.305 [2024-12-06 15:50:42.828543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.306 [2024-12-06 15:50:42.828597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.975 ms, result 0 00:23:54.306 true 00:23:54.306 15:50:42 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:54.564 [2024-12-06 15:50:43.032048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.564 [2024-12-06 15:50:43.032100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:54.564 [2024-12-06 15:50:43.032118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:23:54.564 [2024-12-06 15:50:43.032131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.564 [2024-12-06 15:50:43.032176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.400 ms, result 0 00:23:54.564 true 00:23:54.564 15:50:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89429 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89429 ']' 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89429 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89429 00:23:54.564 killing process with pid 89429 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89429' 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89429 00:23:54.564 15:50:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89429 00:23:54.824 [2024-12-06 15:50:43.308409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.308537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:54.824 [2024-12-06 15:50:43.308565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:54.824 [2024-12-06 15:50:43.308578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.308625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:54.824 [2024-12-06 15:50:43.309917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.309965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:54.824 [2024-12-06 15:50:43.310007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.267 ms 00:23:54.824 [2024-12-06 15:50:43.310022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.310371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.310401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:54.824 [2024-12-06 15:50:43.310416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:54.824 [2024-12-06 15:50:43.310429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.314024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.314069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:54.824 [2024-12-06 15:50:43.314086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:23:54.824 [2024-12-06 15:50:43.314105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.320279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.320320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:54.824 [2024-12-06 15:50:43.320343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:23:54.824 [2024-12-06 15:50:43.320358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.321962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.322169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:54.824 [2024-12-06 15:50:43.322199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:23:54.824 [2024-12-06 15:50:43.322214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.326860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.326905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:54.824 [2024-12-06 15:50:43.326921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.597 ms 00:23:54.824 [2024-12-06 15:50:43.326948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.327078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.327099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:54.824 [2024-12-06 15:50:43.327111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:54.824 [2024-12-06 15:50:43.327125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.329452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.329496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:54.824 [2024-12-06 15:50:43.329510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:23:54.824 [2024-12-06 15:50:43.329526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.331244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.331287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:54.824 [2024-12-06 
15:50:43.331302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:23:54.824 [2024-12-06 15:50:43.331314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.332609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.332803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:54.824 [2024-12-06 15:50:43.332828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:23:54.824 [2024-12-06 15:50:43.332842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.334199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.824 [2024-12-06 15:50:43.334234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:54.824 [2024-12-06 15:50:43.334248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:23:54.824 [2024-12-06 15:50:43.334260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.824 [2024-12-06 15:50:43.334298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:54.824 [2024-12-06 15:50:43.334325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:54.824 [2024-12-06 15:50:43.334347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:54.824 [2024-12-06 15:50:43.334364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:54.824 [2024-12-06 15:50:43.334374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:54.824 [2024-12-06 15:50:43.334389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334536] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 
15:50:43.334833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.334999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:23:54.825 [2024-12-06 15:50:43.335140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:54.825 [2024-12-06 15:50:43.335298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:54.826 [2024-12-06 15:50:43.335538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:54.826 [2024-12-06 15:50:43.335549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e 00:23:54.826 [2024-12-06 15:50:43.335561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:54.826 [2024-12-06 15:50:43.335574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:54.826 [2024-12-06 15:50:43.335585] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:54.826 [2024-12-06 15:50:43.335596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:54.826 [2024-12-06 15:50:43.335619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:54.826 [2024-12-06 15:50:43.335635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:54.826 [2024-12-06 15:50:43.335647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:54.826 [2024-12-06 15:50:43.335657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:54.826 [2024-12-06 15:50:43.335668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:54.826 [2024-12-06 15:50:43.335678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.826 [2024-12-06 15:50:43.335690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:54.826 [2024-12-06 15:50:43.335702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:23:54.826 [2024-12-06 15:50:43.335718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.338510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:54.826 [2024-12-06 15:50:43.338683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:54.826 [2024-12-06 15:50:43.338816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:23:54.826 [2024-12-06 15:50:43.338871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.339255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:54.826 [2024-12-06 15:50:43.339384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:54.826 [2024-12-06 15:50:43.339488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:23:54.826 [2024-12-06 15:50:43.339621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.350770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.350976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:54.826 [2024-12-06 15:50:43.351090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.351146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.351353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.351512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:54.826 [2024-12-06 15:50:43.351627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.351749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.351866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.352025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:54.826 [2024-12-06 15:50:43.352083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.352201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.352335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.352401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:54.826 [2024-12-06 15:50:43.352577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.352635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.371095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.371406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:54.826 [2024-12-06 15:50:43.371532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.371601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.385642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.385846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:54.826 [2024-12-06 15:50:43.385971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.386044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.386235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.386392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:54.826 [2024-12-06 15:50:43.386506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.386627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:54.826 [2024-12-06 15:50:43.386686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.386707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:54.826 [2024-12-06 15:50:43.386721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.386735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.386838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.386862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:54.826 [2024-12-06 15:50:43.386885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.386915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.386988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.387013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:54.826 [2024-12-06 15:50:43.387026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.387043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.387097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.387126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:54.826 [2024-12-06 15:50:43.387141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.826 [2024-12-06 15:50:43.387154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.826 [2024-12-06 15:50:43.387214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:54.826 [2024-12-06 15:50:43.387234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:54.826 [2024-12-06 15:50:43.387247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:54.827 [2024-12-06 15:50:43.387260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:54.827 [2024-12-06 15:50:43.387455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.998 ms, result 0 00:23:55.085 15:50:43 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:55.343 [2024-12-06 15:50:43.809433] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:55.343 [2024-12-06 15:50:43.809862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89465 ] 00:23:55.343 [2024-12-06 15:50:43.964425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.343 [2024-12-06 15:50:44.002318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.600 [2024-12-06 15:50:44.134398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:55.601 [2024-12-06 15:50:44.134786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:55.860 [2024-12-06 15:50:44.292621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.860 [2024-12-06 15:50:44.292867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:55.860 [2024-12-06 15:50:44.292908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:55.860 [2024-12-06 15:50:44.292921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.860 [2024-12-06 15:50:44.295563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.860 [2024-12-06 15:50:44.295608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.860 [2024-12-06 15:50:44.295622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:23:55.861 [2024-12-06 15:50:44.295631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.295723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:55.861 [2024-12-06 15:50:44.296216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:55.861 [2024-12-06 15:50:44.296380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.296429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.861 [2024-12-06 15:50:44.296538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:23:55.861 [2024-12-06 15:50:44.296648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.298615] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:55.861 [2024-12-06 15:50:44.301912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.301958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:55.861 [2024-12-06 15:50:44.301979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:23:55.861 [2024-12-06 15:50:44.301998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.302071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.302089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:55.861 [2024-12-06 15:50:44.302101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:55.861 [2024-12-06 15:50:44.302119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.311864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:55.861 [2024-12-06 15:50:44.312104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.861 [2024-12-06 15:50:44.312129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.693 ms 00:23:55.861 [2024-12-06 15:50:44.312152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.312327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.312348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.861 [2024-12-06 15:50:44.312361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:55.861 [2024-12-06 15:50:44.312371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.312431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.312460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:55.861 [2024-12-06 15:50:44.312493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:55.861 [2024-12-06 15:50:44.312505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.312540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:55.861 [2024-12-06 15:50:44.314824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.314856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.861 [2024-12-06 15:50:44.314880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:23:55.861 [2024-12-06 15:50:44.314894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.314952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.314974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:55.861 [2024-12-06 15:50:44.314986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:55.861 [2024-12-06 15:50:44.315005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.315035] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:55.861 [2024-12-06 15:50:44.315064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:55.861 [2024-12-06 15:50:44.315105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:55.861 [2024-12-06 15:50:44.315134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:55.861 [2024-12-06 15:50:44.315228] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:55.861 [2024-12-06 15:50:44.315243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:55.861 [2024-12-06 15:50:44.315256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:55.861 [2024-12-06 15:50:44.315269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315280] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315291] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:55.861 [2024-12-06 15:50:44.315301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:55.861 [2024-12-06 15:50:44.315311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:55.861 [2024-12-06 15:50:44.315321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:55.861 [2024-12-06 15:50:44.315340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.315351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:55.861 [2024-12-06 15:50:44.315361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:55.861 [2024-12-06 15:50:44.315379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.315469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.861 [2024-12-06 15:50:44.315484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:55.861 [2024-12-06 15:50:44.315505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:55.861 [2024-12-06 15:50:44.315515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.861 [2024-12-06 15:50:44.315613] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:55.861 [2024-12-06 15:50:44.315637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:55.861 [2024-12-06 15:50:44.315648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:55.861 [2024-12-06 15:50:44.315678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:55.861 [2024-12-06 15:50:44.315711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.861 [2024-12-06 15:50:44.315729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:55.861 [2024-12-06 15:50:44.315738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:55.861 [2024-12-06 15:50:44.315747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.861 [2024-12-06 15:50:44.315757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:55.861 [2024-12-06 15:50:44.315766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:55.861 [2024-12-06 15:50:44.315776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:55.861 [2024-12-06 15:50:44.315795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315804] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:55.861 [2024-12-06 15:50:44.315822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:55.861 [2024-12-06 15:50:44.315854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:55.861 [2024-12-06 15:50:44.315881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:55.861 [2024-12-06 15:50:44.315907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:55.861 [2024-12-06 15:50:44.315916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.861 [2024-12-06 15:50:44.315924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:55.861 [2024-12-06 15:50:44.315933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:55.861 [2024-12-06 15:50:44.316199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.861 [2024-12-06 15:50:44.316239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:55.861 [2024-12-06 15:50:44.316272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:55.861 [2024-12-06 15:50:44.316304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.861 [2024-12-06 15:50:44.316410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:55.861 [2024-12-06 15:50:44.316455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:55.861 [2024-12-06 15:50:44.316508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.316557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:55.861 [2024-12-06 15:50:44.316685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:55.861 [2024-12-06 15:50:44.316735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.316816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:55.861 [2024-12-06 15:50:44.316857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:55.861 [2024-12-06 15:50:44.316891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.861 [2024-12-06 15:50:44.317016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.861 [2024-12-06 15:50:44.317062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:55.862 [2024-12-06 15:50:44.317096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:55.862 [2024-12-06 15:50:44.317204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:55.862 
[2024-12-06 15:50:44.317251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:55.862 [2024-12-06 15:50:44.317283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:55.862 [2024-12-06 15:50:44.317382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:55.862 [2024-12-06 15:50:44.317475] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:55.862 [2024-12-06 15:50:44.317592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:55.862 [2024-12-06 15:50:44.317770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:55.862 [2024-12-06 15:50:44.317780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:55.862 [2024-12-06 15:50:44.317791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:55.862 [2024-12-06 15:50:44.317801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:55.862 [2024-12-06 15:50:44.317811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:55.862 [2024-12-06 15:50:44.317821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:55.862 [2024-12-06 15:50:44.317842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:55.862 [2024-12-06 15:50:44.317853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:55.862 [2024-12-06 15:50:44.317863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:55.862 [2024-12-06 15:50:44.317915] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:55.862 [2024-12-06 15:50:44.317932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:55.862 [2024-12-06 15:50:44.317974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:55.862 [2024-12-06 15:50:44.317984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:55.862 [2024-12-06 15:50:44.317995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:55.862 [2024-12-06 15:50:44.318007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.318033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:55.862 [2024-12-06 15:50:44.318045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:23:55.862 [2024-12-06 15:50:44.318056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.333825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.333885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:55.862 [2024-12-06 15:50:44.333901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.668 ms 00:23:55.862 [2024-12-06 15:50:44.333912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.334118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.334143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:55.862 [2024-12-06 15:50:44.334156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:55.862 [2024-12-06 15:50:44.334166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.354737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.354787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.862 [2024-12-06 15:50:44.354804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.540 ms 00:23:55.862 [2024-12-06 15:50:44.354815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.354916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.354934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.862 [2024-12-06 15:50:44.354986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:55.862 [2024-12-06 15:50:44.354997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.355573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.355611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.862 [2024-12-06 15:50:44.355625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:23:55.862 [2024-12-06 15:50:44.355636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.355802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.355822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.862 [2024-12-06 15:50:44.355833] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:55.862 [2024-12-06 15:50:44.355843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.365007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.365042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.862 [2024-12-06 15:50:44.365056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.136 ms 00:23:55.862 [2024-12-06 15:50:44.365071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.367917] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:55.862 [2024-12-06 15:50:44.367971] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:55.862 [2024-12-06 15:50:44.367996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.368007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:55.862 [2024-12-06 15:50:44.368018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:23:55.862 [2024-12-06 15:50:44.368028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.381235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.381492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:55.862 [2024-12-06 15:50:44.381529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.154 ms 00:23:55.862 [2024-12-06 15:50:44.381542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.383504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.383540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:55.862 [2024-12-06 15:50:44.383555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:23:55.862 [2024-12-06 15:50:44.383564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.385081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.385123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:55.862 [2024-12-06 15:50:44.385138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:23:55.862 [2024-12-06 15:50:44.385148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.385483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.385502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:55.862 [2024-12-06 15:50:44.385514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:23:55.862 [2024-12-06 15:50:44.385524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.409338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.409405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:55.862 [2024-12-06 15:50:44.409424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.783 ms 00:23:55.862 [2024-12-06 15:50:44.409435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.416081] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:55.862 [2024-12-06 15:50:44.434650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.435017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:55.862 [2024-12-06 15:50:44.435045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.107 ms 00:23:55.862 [2024-12-06 15:50:44.435058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.435192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.435212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:55.862 [2024-12-06 15:50:44.435230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:55.862 [2024-12-06 15:50:44.435242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.435319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.435336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:55.862 [2024-12-06 15:50:44.435364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:55.862 [2024-12-06 15:50:44.435389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.435432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.862 [2024-12-06 15:50:44.435449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:55.862 [2024-12-06 15:50:44.435460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:55.862 [2024-12-06 15:50:44.435474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.862 [2024-12-06 15:50:44.435518] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:55.863 [2024-12-06 15:50:44.435535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.863 [2024-12-06 15:50:44.435546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:55.863 [2024-12-06 15:50:44.435557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:55.863 [2024-12-06 15:50:44.435566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.863 [2024-12-06 15:50:44.439960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.863 [2024-12-06 15:50:44.440004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:55.863 [2024-12-06 15:50:44.440022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:23:55.863 [2024-12-06 15:50:44.440033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.863 [2024-12-06 15:50:44.440124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.863 [2024-12-06 15:50:44.440143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:55.863 [2024-12-06 15:50:44.440154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:55.863 [2024-12-06 15:50:44.440176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.863 
[2024-12-06 15:50:44.441578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:55.863 [2024-12-06 15:50:44.442661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.598 ms, result 0 00:23:55.863 [2024-12-06 15:50:44.443621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:55.863 [2024-12-06 15:50:44.451648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:57.235  [2024-12-06T15:50:46.861Z] Copying: 27/256 [MB] (27 MBps) [2024-12-06T15:50:47.796Z] Copying: 52/256 [MB] (24 MBps) [2024-12-06T15:50:48.730Z] Copying: 77/256 [MB] (24 MBps) [2024-12-06T15:50:49.675Z] Copying: 102/256 [MB] (24 MBps) [2024-12-06T15:50:50.610Z] Copying: 126/256 [MB] (24 MBps) [2024-12-06T15:50:51.546Z] Copying: 150/256 [MB] (24 MBps) [2024-12-06T15:50:52.920Z] Copying: 175/256 [MB] (24 MBps) [2024-12-06T15:50:53.853Z] Copying: 200/256 [MB] (24 MBps) [2024-12-06T15:50:54.786Z] Copying: 225/256 [MB] (25 MBps) [2024-12-06T15:50:54.786Z] Copying: 250/256 [MB] (25 MBps) [2024-12-06T15:50:55.046Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-06 15:50:54.849614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:06.353 [2024-12-06 15:50:54.851556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.851605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:06.353 [2024-12-06 15:50:54.851625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:06.353 [2024-12-06 15:50:54.851637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.851669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:06.353 [2024-12-06 15:50:54.852837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.852871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:06.353 [2024-12-06 15:50:54.852885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:24:06.353 [2024-12-06 15:50:54.852896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.853185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.853205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:06.353 [2024-12-06 15:50:54.853222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:06.353 [2024-12-06 15:50:54.853233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.856094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.856122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:06.353 [2024-12-06 15:50:54.856135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:24:06.353 [2024-12-06 15:50:54.856145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.861995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.862296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:24:06.353 [2024-12-06 15:50:54.862321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.826 ms 00:24:06.353 [2024-12-06 15:50:54.862361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.864410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.864443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:06.353 [2024-12-06 15:50:54.864457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:24:06.353 [2024-12-06 15:50:54.864467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.869196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.869348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:06.353 [2024-12-06 15:50:54.869372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:24:06.353 [2024-12-06 15:50:54.869384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.869515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.869534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:06.353 [2024-12-06 15:50:54.869546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:06.353 [2024-12-06 15:50:54.869561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.871628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.871663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:06.353 [2024-12-06 15:50:54.871677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:24:06.353 [2024-12-06 15:50:54.871686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.873243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.873276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:06.353 [2024-12-06 15:50:54.873289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:24:06.353 [2024-12-06 15:50:54.873297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.874649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.874685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:06.353 [2024-12-06 15:50:54.874698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:24:06.353 [2024-12-06 15:50:54.874707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.875885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.353 [2024-12-06 15:50:54.875921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:06.353 [2024-12-06 15:50:54.875934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:24:06.353 [2024-12-06 15:50:54.875960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.353 [2024-12-06 15:50:54.875995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:24:06.353 [2024-12-06 15:50:54.876016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:24:06.353 [2024-12-06 15:50:54.876030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:24:06.353 [... ftl_dev_dump_bands entries for Bands 3 through 99 elided: all identical, 0 / 261120 wr_cnt: 0 state: free ...]
00:24:06.354 [2024-12-06 15:50:54.878118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:06.354 [2024-12-06 15:50:54.878181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:06.354 [2024-12-06 15:50:54.878214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 847b1325-e058-4168-96b2-9a15170e279e
00:24:06.354 [2024-12-06 15:50:54.878339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:06.354 [2024-12-06 15:50:54.878374] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:06.354 [2024-12-06 15:50:54.878405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:06.354 [2024-12-06 15:50:54.878435] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:06.354 [2024-12-06 15:50:54.878448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:06.354 [2024-12-06 15:50:54.878459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:06.354 [2024-12-06 15:50:54.878476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:06.354 [2024-12-06 15:50:54.878484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:06.354 [2024-12-06 15:50:54.878492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:06.354 [2024-12-06 15:50:54.878503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.354 [2024-12-06 15:50:54.878513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:06.354 [2024-12-06 15:50:54.878524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms
00:24:06.354 [2024-12-06 15:50:54.878533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.880599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.354 [2024-12-06 15:50:54.880625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:06.354 [2024-12-06 15:50:54.880637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms
00:24:06.354 [2024-12-06 15:50:54.880647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.880790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:06.354 [2024-12-06 15:50:54.880806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:06.354 [2024-12-06 15:50:54.880817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms
00:24:06.354 [2024-12-06 15:50:54.880827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.889365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.354 [2024-12-06 15:50:54.889405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:06.354 [2024-12-06 15:50:54.889419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.354 [2024-12-06 15:50:54.889444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.889522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.354 [2024-12-06 15:50:54.889538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:06.354 [2024-12-06 15:50:54.889550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.354 [2024-12-06 15:50:54.889560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
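The "WAF: inf" line above follows directly from the two counters that precede it: with 960 total media writes and 0 user writes, the write-amplification factor (total writes divided by user writes) is a division by zero. A minimal sketch of that arithmetic, for illustration only (this is not SPDK code; the variable names are hypothetical):

#!/usr/bin/env bash
# Sketch of the arithmetic behind the "WAF: inf" line in the dump above.
total_writes=960   # from "total writes: 960"
user_writes=0      # from "user writes: 0"
if (( user_writes == 0 )); then
    echo "WAF: inf"   # nothing was written through the user path yet
else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
fi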
00:24:06.354 [2024-12-06 15:50:54.889611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.354 [2024-12-06 15:50:54.889626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:06.354 [2024-12-06 15:50:54.889637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.354 [2024-12-06 15:50:54.889646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.889675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.354 [2024-12-06 15:50:54.889687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:06.354 [2024-12-06 15:50:54.889707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.354 [2024-12-06 15:50:54.889716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.354 [2024-12-06 15:50:54.902189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.354 [2024-12-06 15:50:54.902243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:06.354 [2024-12-06 15:50:54.902259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.354 [2024-12-06 15:50:54.902276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.912246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.912560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:06.355 [2024-12-06 15:50:54.912585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.912597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.912657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.912674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:06.355 [2024-12-06 15:50:54.912686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.912696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.912734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.912765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:06.355 [2024-12-06 15:50:54.912776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.912787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.912890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.912913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:06.355 [2024-12-06 15:50:54.912934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.912989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.913039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.913055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:06.355 [2024-12-06 15:50:54.913072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.913082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
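Note the ordering of the "Rollback" names above: the shutdown unwinds each earlier "Initialize ..." step in reverse order of setup. A generic sketch of that unwind pattern in bash, mirroring the trace (this is an illustration of the pattern, not the FTL management code; step names are just examples):

#!/usr/bin/env bash
# Reverse-order rollback: remember each completed setup step, undo last-first.
steps=()
run_step() {
    echo "Action: $1"
    steps+=("$1")                       # record for later rollback
}
rollback_all() {
    for (( i=${#steps[@]}-1; i>=0; i-- )); do
        echo "Rollback: ${steps[i]}"    # undo in reverse order of setup
    done
}
run_step "Open base bdev"
run_step "Open cache bdev"
run_step "Initialize superblock"
rollback_all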
00:24:06.355 [2024-12-06 15:50:54.913135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.913149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:06.355 [2024-12-06 15:50:54.913160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.913169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.913226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:06.355 [2024-12-06 15:50:54.913246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:06.355 [2024-12-06 15:50:54.913258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:06.355 [2024-12-06 15:50:54.913268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:06.355 [2024-12-06 15:50:54.913452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.861 ms, result 0
00:24:06.613
00:24:06.613
00:24:06.613 15:50:55 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:07.179 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:07.179 15:50:55 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89429
00:24:07.179 15:50:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89429 ']'
00:24:07.179 15:50:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89429
00:24:07.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89429) - No such process
00:24:07.179 Process with pid 89429 is not found
00:24:07.179 15:50:55 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89429 is not found'
00:24:07.179
00:24:07.179 real 1m0.205s
00:24:07.179 user 1m25.244s
00:24:07.179 sys 0m7.055s
00:24:07.179 15:50:55 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:07.179 15:50:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:24:07.179 ************************************
00:24:07.179 END TEST ftl_trim
00:24:07.179 ************************************
00:24:07.179 15:50:55 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:07.179 15:50:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:07.179 15:50:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:07.179 15:50:55 ftl -- common/autotest_common.sh@10 -- # set +x
00:24:07.179 ************************************
00:24:07.179 START TEST ftl_restore
00:24:07.179 ************************************
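The ftl_trim teardown above is the usual autotest pattern: verify the data file against a stored checksum, remove the scratch artifacts, then kill the target only if it is still alive. A condensed sketch of that flow, assuming the same file names the log shows (the killprocess body is simplified from the real helper; the kill -0 probe is what produces the "No such process" line when the target already exited):

#!/usr/bin/env bash
# Condensed sketch of the trim-test teardown seen above.
testdir=/home/vagrant/spdk_repo/spdk/test/ftl

md5sum -c "$testdir/testfile.md5"        # verify read-back, prints ".../data: OK"
rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json" \
      "$testdir/random_pattern" "$testdir/data"

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if kill -0 "$pid" 2>/dev/null; then  # probe: is the process still running?
        kill "$pid" && wait "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}
killprocess 89429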
00:24:07.179 15:50:55 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:07.438 * Looking for test storage...
00:24:07.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.438 15:50:55 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:07.438 15:50:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version
00:24:07.438 15:50:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:07.438 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:07.439 15:50:56 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.439 --rc genhtml_branch_coverage=1
00:24:07.439 --rc genhtml_function_coverage=1
00:24:07.439 --rc genhtml_legend=1
00:24:07.439 --rc geninfo_all_blocks=1
00:24:07.439 --rc geninfo_unexecuted_blocks=1
00:24:07.439
00:24:07.439 '
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.439 --rc genhtml_branch_coverage=1
00:24:07.439 --rc genhtml_function_coverage=1
00:24:07.439 --rc genhtml_legend=1
00:24:07.439 --rc geninfo_all_blocks=1
00:24:07.439 --rc geninfo_unexecuted_blocks=1
00:24:07.439
00:24:07.439 '
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.439 --rc genhtml_branch_coverage=1
00:24:07.439 --rc genhtml_function_coverage=1
00:24:07.439 --rc genhtml_legend=1
00:24:07.439 --rc geninfo_all_blocks=1
00:24:07.439 --rc geninfo_unexecuted_blocks=1
00:24:07.439
00:24:07.439 '
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:07.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.439 --rc genhtml_branch_coverage=1
00:24:07.439 --rc genhtml_function_coverage=1
00:24:07.439 --rc genhtml_legend=1
00:24:07.439 --rc geninfo_all_blocks=1
00:24:07.439 --rc geninfo_unexecuted_blocks=1
00:24:07.439
00:24:07.439 '
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
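The cmp_versions trace above splits each version string on ".-:" and compares the fields numerically, which is how "lt 1.15 2" succeeds (1 < 2 on the first field). A simplified re-implementation of that idea, not the SPDK helper itself:

#!/usr/bin/env bash
# Minimal sketch of the field-by-field version comparison traced above.
version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace result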
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.BnNuGqpwgA
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
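The getopts trace above shows how restore.sh turns its command line into the test configuration: -c consumes the NV-cache PCI address, and the remaining positional argument becomes the base device. A simplified sketch of that parsing, with variable names following the log (the u/f branches are elided since this run does not exercise them):

#!/usr/bin/env bash
# Sketch of the "getopts :u:c:f" option handling traced above.
nv_cache="" device=""
while getopts ":u:c:f" opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;     # -c 0000:00:10.0 in this run
        u|f) ;;                    # other modes, unused here
    esac
done
shift $((OPTIND - 1))              # the trace's "shift 2" after -c <bdf>
device=$1                          # 0000:00:11.0 in this run
timeout=240
echo "nv_cache=$nv_cache device=$device timeout=$timeout"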
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89663
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89663
00:24:07.439 15:50:56 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 89663 ']'
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:07.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:07.439 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:24:07.698 [2024-12-06 15:50:56.181547] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:24:07.698 [2024-12-06 15:50:56.181731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89663 ]
00:24:07.698 [2024-12-06 15:50:56.331512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:07.698 [2024-12-06 15:50:56.370540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:08.267 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:08.267 15:50:56 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0
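waitforlisten, traced above, launches spdk_tgt in the background and polls until the target answers on its RPC socket, bailing out if the process dies or the retry budget is exhausted. A simplified sketch of that pattern (not the real autotest helper; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe):

#!/usr/bin/env bash
# Simplified waitforlisten: poll the RPC socket of a freshly launched target.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
svcpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
max_retries=100
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early"; exit 1; }
    (( max_retries-- > 0 )) || exit 1   # give up after the retry budget
    sleep 0.1
done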
"6b36d9e6-b02f-4368-9cdb-372aa4f9b521", 00:24:08.786 "numa_id": -1, 00:24:08.786 "assigned_rate_limits": { 00:24:08.786 "rw_ios_per_sec": 0, 00:24:08.786 "rw_mbytes_per_sec": 0, 00:24:08.786 "r_mbytes_per_sec": 0, 00:24:08.786 "w_mbytes_per_sec": 0 00:24:08.786 }, 00:24:08.786 "claimed": true, 00:24:08.786 "claim_type": "read_many_write_one", 00:24:08.786 "zoned": false, 00:24:08.786 "supported_io_types": { 00:24:08.786 "read": true, 00:24:08.786 "write": true, 00:24:08.786 "unmap": true, 00:24:08.786 "flush": true, 00:24:08.786 "reset": true, 00:24:08.786 "nvme_admin": true, 00:24:08.786 "nvme_io": true, 00:24:08.786 "nvme_io_md": false, 00:24:08.786 "write_zeroes": true, 00:24:08.786 "zcopy": false, 00:24:08.786 "get_zone_info": false, 00:24:08.786 "zone_management": false, 00:24:08.786 "zone_append": false, 00:24:08.786 "compare": true, 00:24:08.786 "compare_and_write": false, 00:24:08.786 "abort": true, 00:24:08.786 "seek_hole": false, 00:24:08.786 "seek_data": false, 00:24:08.786 "copy": true, 00:24:08.786 "nvme_iov_md": false 00:24:08.786 }, 00:24:08.786 "driver_specific": { 00:24:08.786 "nvme": [ 00:24:08.786 { 00:24:08.786 "pci_address": "0000:00:11.0", 00:24:08.786 "trid": { 00:24:08.786 "trtype": "PCIe", 00:24:08.786 "traddr": "0000:00:11.0" 00:24:08.786 }, 00:24:08.786 "ctrlr_data": { 00:24:08.786 "cntlid": 0, 00:24:08.786 "vendor_id": "0x1b36", 00:24:08.786 "model_number": "QEMU NVMe Ctrl", 00:24:08.786 "serial_number": "12341", 00:24:08.786 "firmware_revision": "8.0.0", 00:24:08.786 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:08.786 "oacs": { 00:24:08.786 "security": 0, 00:24:08.786 "format": 1, 00:24:08.786 "firmware": 0, 00:24:08.786 "ns_manage": 1 00:24:08.786 }, 00:24:08.786 "multi_ctrlr": false, 00:24:08.786 "ana_reporting": false 00:24:08.786 }, 00:24:08.786 "vs": { 00:24:08.786 "nvme_version": "1.4" 00:24:08.786 }, 00:24:08.786 "ns_data": { 00:24:08.786 "id": 1, 00:24:08.786 "can_share": false 00:24:08.786 } 00:24:08.786 } 00:24:08.786 ], 00:24:08.786 "mp_policy": "active_passive" 00:24:08.786 } 00:24:08.786 } 00:24:08.786 ]' 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:08.786 15:50:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=53fd3d8d-9a85-4805-b1d2-fc12dcce8123 00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53fd3d8d-9a85-4805-b1d2-fc12dcce8123 00:24:09.303 15:50:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:24:08.786 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=53fd3d8d-9a85-4805-b1d2-fc12dcce8123
00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:24:09.045 15:50:57 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53fd3d8d-9a85-4805-b1d2-fc12dcce8123
00:24:09.303 15:50:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:24:09.561 15:50:58 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ba0c5666-8370-4f45-974a-c93ec0126e0f
00:24:09.562 15:50:58 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ba0c5666-8370-4f45-974a-c93ec0126e0f
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:24:09.820 15:50:58 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:09.820 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:09.820 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:09.820 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:09.820 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:09.820 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ { "name": "acc1bd8a-ebb5-463d-8b46-cb66732abfba", "aliases": [ "lvs/nvme0n1p0" ], "product_name": "Logical Volume", "block_size": 4096, "num_blocks": 26476544, "uuid": "acc1bd8a-ebb5-463d-8b46-cb66732abfba", "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 }, "claimed": false, "zoned": false, "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": false, "reset": true, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": false, "compare_and_write": false, "abort": false, "seek_hole": true, "seek_data": true, "copy": false, "nvme_iov_md": false }, "driver_specific": { "lvol": { "lvol_store_uuid": "ba0c5666-8370-4f45-974a-c93ec0126e0f", "base_bdev": "nvme0n1", "thin_provision": true, "num_allocated_clusters": 0, "snapshot": false, "clone": false, "esnap_clone": false } } } ]'
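The lvstore sequence above first clears any stale stores, then builds a fresh one and carves a 103424 MiB thin-provisioned volume out of the 5120 MiB base device; with -t, blocks are only allocated on write, which is why the descriptor shows num_allocated_clusters: 0 despite the large logical size. A condensed sketch of that flow, using the same rpc.py calls the log shows:

#!/usr/bin/env bash
# Sketch of the clear_lvols + lvstore/lvol creation traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Drop any lvstores left behind by earlier tests.
stores=$("$rpc_py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
for lvs in $stores; do
    "$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"
done

# Fresh store on the base device, then a thin (-t) 103424 MiB volume:
# logical size may exceed physical capacity because allocation is lazy.
lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
"$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"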
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:10.079 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:10.079 15:50:58 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:24:10.079 15:50:58 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:24:10.079 15:50:58 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:24:10.337 15:50:58 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:24:10.337 15:50:58 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:24:10.337 15:50:58 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:10.337 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:10.337 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:10.337 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:10.337 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:10.337 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:10.595 15:50:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ ... identical acc1bd8a-ebb5-463d-8b46-cb66732abfba descriptor to the dump above ... ]'
00:24:10.595 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:10.853 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:10.853 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:10.853 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:10.853 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:10.853 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:10.853 15:50:59 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:24:10.853 15:50:59 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:24:11.111 15:50:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
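The split above carves the NV-cache device into a single 5171 MiB partition, nvc0n1p0, which becomes the FTL write-buffer cache. A minimal sketch of that step with the same rpc.py call:

#!/usr/bin/env bash
# Sketch of the NV-cache carving traced above: one cache_size-MiB slice.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cache_size=5171                                    # MiB, from get_bdev_size
"$rpc_py" bdev_split_create nvc0n1 -s "$cache_size" 1   # yields nvc0n1p0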
00:24:11.111 15:50:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:11.111 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:11.111 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:11.111 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:11.111 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:11.111 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b acc1bd8a-ebb5-463d-8b46-cb66732abfba
00:24:11.368 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ ... identical acc1bd8a-ebb5-463d-8b46-cb66732abfba descriptor to the dump above ... ]'
00:24:11.368 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:11.369 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:11.369 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:11.369 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:11.369 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:11.369 15:50:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d acc1bd8a-ebb5-463d-8b46-cb66732abfba --l2p_dram_limit 10'
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:24:11.369 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:24:11.369 15:50:59 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d acc1bd8a-ebb5-463d-8b46-cb66732abfba --l2p_dram_limit 10 -c nvc0n1p0
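Note the non-fatal "[: : integer expression expected" above: restore.sh line 54 runs a numeric test against a variable that is empty in this run, so the test errors instead of evaluating false. A defensive pattern (a hypothetical fix sketch, not the current restore.sh code; the variable name here is made up) is to default the variable before comparing:

#!/usr/bin/env bash
# Guarding a numeric test against an unset/empty variable, as in the
# "[ '' -eq 1 ]" failure logged above.
some_flag=""                               # empty, as in the failing run
if [ "${some_flag:-0}" -eq 1 ]; then       # ":-0" keeps the operand numeric
    echo "flag set"
fi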
00:24:11.628 [2024-12-06 15:51:00.167313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.167382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:11.628 [2024-12-06 15:51:00.167401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:11.628 [2024-12-06 15:51:00.167415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.167486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.167506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:11.628 [2024-12-06 15:51:00.167520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:24:11.628 [2024-12-06 15:51:00.167535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.167560] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:11.628 [2024-12-06 15:51:00.167882] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:11.628 [2024-12-06 15:51:00.167919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.167949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:11.628 [2024-12-06 15:51:00.167971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms
00:24:11.628 [2024-12-06 15:51:00.167985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.168164] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69
00:24:11.628 [2024-12-06 15:51:00.169951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.169981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:24:11.628 [2024-12-06 15:51:00.170004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:24:11.628 [2024-12-06 15:51:00.170014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.180697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.180747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:11.628 [2024-12-06 15:51:00.180768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.620 ms
00:24:11.628 [2024-12-06 15:51:00.180784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.180934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.180949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:11.628 [2024-12-06 15:51:00.180970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:24:11.628 [2024-12-06 15:51:00.180991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.181082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.181107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:11.628 [2024-12-06 15:51:00.181122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:24:11.628 [2024-12-06 15:51:00.181132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.181166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:11.628 [2024-12-06 15:51:00.183551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.183583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:11.628 [2024-12-06 15:51:00.183596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms
00:24:11.628 [2024-12-06 15:51:00.183608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.183649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.183665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:11.628 [2024-12-06 15:51:00.183677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:24:11.628 [2024-12-06 15:51:00.183693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.183715] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:24:11.628 [2024-12-06 15:51:00.183862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:11.628 [2024-12-06 15:51:00.183881] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:11.628 [2024-12-06 15:51:00.183898] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:11.628 [2024-12-06 15:51:00.183911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:11.628 [2024-12-06 15:51:00.183932] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:11.628 [2024-12-06 15:51:00.183958] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:24:11.628 [2024-12-06 15:51:00.183975] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:24:11.628 [2024-12-06 15:51:00.183985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:24:11.628 [2024-12-06 15:51:00.183997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
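The L2P numbers above are internally consistent: 20971520 entries at the reported 4-byte address size is exactly the 80 MiB "l2p" region in the layout dump that follows, while the --l2p_dram_limit 10 passed to bdev_ftl_create keeps only 10 MiB of that table resident in DRAM. A quick cross-check of the arithmetic (illustration only):

#!/usr/bin/env bash
# Cross-checking the L2P figures logged above.
entries=20971520
entry_size=4                                      # bytes, "L2P address size: 4"
echo $(( entries * entry_size / 1024 / 1024 ))    # 80  -> the 80 MiB l2p region
echo $(( 10 * 1024 * 1024 / entry_size ))         # 2621440 entries fit in 10 MiB DRAM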
00:24:11.628 [2024-12-06 15:51:00.184016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.184030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:24:11.628 [2024-12-06 15:51:00.184040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms
00:24:11.628 [2024-12-06 15:51:00.184060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.184141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:11.628 [2024-12-06 15:51:00.184161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:24:11.628 [2024-12-06 15:51:00.184171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:24:11.628 [2024-12-06 15:51:00.184183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:11.628 [2024-12-06 15:51:00.184275] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:11.628 [2024-12-06 15:51:00.184301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:11.628 [2024-12-06 15:51:00.184313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:11.628 [2024-12-06 15:51:00.184328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:11.628 [2024-12-06 15:51:00.184350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:24:11.628 [2024-12-06 15:51:00.184371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:11.628 [2024-12-06 15:51:00.184381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:11.628 [2024-12-06 15:51:00.184401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:11.628 [2024-12-06 15:51:00.184412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:24:11.628 [2024-12-06 15:51:00.184421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:11.628 [2024-12-06 15:51:00.184435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:11.628 [2024-12-06 15:51:00.184444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:24:11.628 [2024-12-06 15:51:00.184455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:11.628 [2024-12-06 15:51:00.184478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:24:11.628 [2024-12-06 15:51:00.184515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:11.628 [2024-12-06 15:51:00.184543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:11.628 [2024-12-06 15:51:00.184566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:11.628 [2024-12-06 15:51:00.184764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:24:11.628 [2024-12-06 15:51:00.184784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
[2024-12-06 15:51:00.184764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.628 [2024-12-06 15:51:00.184784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.628 [2024-12-06 15:51:00.184813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.628 [2024-12-06 15:51:00.184839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.628 [2024-12-06 15:51:00.184852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.628 [2024-12-06 15:51:00.184861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.629 [2024-12-06 15:51:00.184877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:11.629 [2024-12-06 15:51:00.184886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.629 [2024-12-06 15:51:00.184897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.629 [2024-12-06 15:51:00.184906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:11.629 [2024-12-06 15:51:00.184917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.629 [2024-12-06 15:51:00.184926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.629 [2024-12-06 15:51:00.184937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:11.629 [2024-12-06 15:51:00.184946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.629 [2024-12-06 15:51:00.184957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.629 [2024-12-06 15:51:00.184979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:11.629 [2024-12-06 15:51:00.184994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.629 [2024-12-06 15:51:00.185004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:11.629 [2024-12-06 15:51:00.185015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:11.629 [2024-12-06 15:51:00.185023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.629 [2024-12-06 15:51:00.185034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.629 [2024-12-06 15:51:00.185054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.629 [2024-12-06 15:51:00.185071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.629 [2024-12-06 15:51:00.185081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.629 [2024-12-06 15:51:00.185094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.629 [2024-12-06 15:51:00.185103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.629 [2024-12-06 15:51:00.185115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.629 [2024-12-06 15:51:00.185125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.629 [2024-12-06 15:51:00.185136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.629 [2024-12-06 15:51:00.185146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.629 [2024-12-06 15:51:00.185160] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.629 [2024-12-06 
15:51:00.185177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.629 [2024-12-06 15:51:00.185201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:11.629 [2024-12-06 15:51:00.185214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:11.629 [2024-12-06 15:51:00.185223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:11.629 [2024-12-06 15:51:00.185235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:11.629 [2024-12-06 15:51:00.185245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:11.629 [2024-12-06 15:51:00.185259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:11.629 [2024-12-06 15:51:00.185268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:11.629 [2024-12-06 15:51:00.185280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:11.629 [2024-12-06 15:51:00.185290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:11.629 [2024-12-06 15:51:00.185346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.629 [2024-12-06 15:51:00.185356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.629 [2024-12-06 15:51:00.185380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.629 [2024-12-06 15:51:00.185392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.629 [2024-12-06 15:51:00.185401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.629 [2024-12-06 15:51:00.185415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.629 [2024-12-06 15:51:00.185425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.629 [2024-12-06 15:51:00.185440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:24:11.629 [2024-12-06 15:51:00.185449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.629 [2024-12-06 15:51:00.185525] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:11.629 [2024-12-06 15:51:00.185544] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:14.169 [2024-12-06 15:51:02.841323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.170 [2024-12-06 15:51:02.841389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:14.170 [2024-12-06 15:51:02.841410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2655.808 ms 00:24:14.170 [2024-12-06 15:51:02.841422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.170 [2024-12-06 15:51:02.858059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.170 [2024-12-06 15:51:02.858114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.170 [2024-12-06 15:51:02.858135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.517 ms 00:24:14.170 [2024-12-06 15:51:02.858146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.170 [2024-12-06 15:51:02.858266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.170 [2024-12-06 15:51:02.858281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.170 [2024-12-06 15:51:02.858296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:14.170 [2024-12-06 15:51:02.858305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.443 [2024-12-06 15:51:02.873439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.443 [2024-12-06 15:51:02.873486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.443 [2024-12-06 15:51:02.873504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.056 ms 00:24:14.443 [2024-12-06 15:51:02.873515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.443 [2024-12-06 15:51:02.873572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.443 [2024-12-06 15:51:02.873587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.443 [2024-12-06 15:51:02.873601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:14.444 [2024-12-06 15:51:02.873611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.874373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.874401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.444 [2024-12-06 15:51:02.874417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:24:14.444 [2024-12-06 15:51:02.874427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 
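The trace_step records above break the 'FTL startup' management process into named steps with per-step durations; the NV cache scrub (2655.808 ms for 5 chunks) accounts for nearly all of the 2856.786 ms total reported further down, since the cache data region is wiped chunk by chunk before first use. A minimal sketch for pulling those timings out of a captured log, assuming one *NOTICE* record per line (ftl.log is a hypothetical capture of this output):

  # print "duration<TAB>step name", slowest step first; fields split on ": ",
  # so $NF is the step name on 428:trace_step lines and "<ms> ms" on 430 lines
  awk -F': ' '/428:trace_step/ {name=$NF} /430:trace_step/ {print $NF "\t" name}' ftl.log | sort -rn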
[2024-12-06 15:51:02.874582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.874602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.444 [2024-12-06 15:51:02.874625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:24:14.444 [2024-12-06 15:51:02.874635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.884957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.885002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.444 [2024-12-06 15:51:02.885021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.293 ms 00:24:14.444 [2024-12-06 15:51:02.885033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.902668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:14.444 [2024-12-06 15:51:02.906725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.906762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.444 [2024-12-06 15:51:02.906786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.583 ms 00:24:14.444 [2024-12-06 15:51:02.906800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.969911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.969967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:14.444 [2024-12-06 15:51:02.969995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.071 ms 00:24:14.444 [2024-12-06 15:51:02.970013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.970223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.970245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.444 [2024-12-06 15:51:02.970257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:24:14.444 [2024-12-06 15:51:02.970269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.973905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.973958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:14.444 [2024-12-06 15:51:02.973975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:24:14.444 [2024-12-06 15:51:02.973992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.976857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.977139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:14.444 [2024-12-06 15:51:02.977164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:24:14.444 [2024-12-06 15:51:02.977178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:02.977511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:02.977532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.444 
[2024-12-06 15:51:02.977545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:24:14.444 [2024-12-06 15:51:02.977559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.010425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.010477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:14.444 [2024-12-06 15:51:03.010496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.827 ms 00:24:14.444 [2024-12-06 15:51:03.010509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.015797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.015841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:14.444 [2024-12-06 15:51:03.015857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:24:14.444 [2024-12-06 15:51:03.015870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.019192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.019233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:14.444 [2024-12-06 15:51:03.019247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:24:14.444 [2024-12-06 15:51:03.019258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.022952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.022990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.444 [2024-12-06 15:51:03.023004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:24:14.444 [2024-12-06 15:51:03.023019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.023064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.023084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.444 [2024-12-06 15:51:03.023096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:14.444 [2024-12-06 15:51:03.023109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.023184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.444 [2024-12-06 15:51:03.023201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.444 [2024-12-06 15:51:03.023220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:14.444 [2024-12-06 15:51:03.023241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.444 [2024-12-06 15:51:03.024585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2856.786 ms, result 0 00:24:14.444 { 00:24:14.444 "name": "ftl0", 00:24:14.444 "uuid": "b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69" 00:24:14.444 } 00:24:14.444 15:51:03 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:14.444 15:51:03 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:14.725 15:51:03 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:14.725 15:51:03 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:14.998 [2024-12-06 15:51:03.654795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.998 [2024-12-06 15:51:03.654980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:14.998 [2024-12-06 15:51:03.655102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:14.998 [2024-12-06 15:51:03.655124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.998 [2024-12-06 15:51:03.655166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:14.998 [2024-12-06 15:51:03.656176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.998 [2024-12-06 15:51:03.656215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:14.998 [2024-12-06 15:51:03.656229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:24:14.998 [2024-12-06 15:51:03.656242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.998 [2024-12-06 15:51:03.656470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.656517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:14.999 [2024-12-06 15:51:03.656531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:24:14.999 [2024-12-06 15:51:03.656547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.659321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.659464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:14.999 [2024-12-06 15:51:03.659572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:24:14.999 [2024-12-06 15:51:03.659628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.665010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.665187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:14.999 [2024-12-06 15:51:03.665294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:24:14.999 [2024-12-06 15:51:03.665349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.666764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.666947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:14.999 [2024-12-06 15:51:03.667063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:24:14.999 [2024-12-06 15:51:03.667112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.672265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.672307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:14.999 [2024-12-06 15:51:03.672321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:24:14.999 [2024-12-06 15:51:03.672343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.672454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.672473] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:14.999 [2024-12-06 15:51:03.672514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:14.999 [2024-12-06 15:51:03.672530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.674568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.674606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:14.999 [2024-12-06 15:51:03.674619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:24:14.999 [2024-12-06 15:51:03.674631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.676144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.676186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:14.999 [2024-12-06 15:51:03.676209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:24:14.999 [2024-12-06 15:51:03.676221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.677435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.677473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:14.999 [2024-12-06 15:51:03.677486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:24:14.999 [2024-12-06 15:51:03.677499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.678553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.999 [2024-12-06 15:51:03.678592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.999 [2024-12-06 15:51:03.678605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:24:14.999 [2024-12-06 15:51:03.678616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.999 [2024-12-06 15:51:03.678649] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.999 [2024-12-06 15:51:03.678672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678780] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.678989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 
[2024-12-06 15:51:03.679091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.999 [2024-12-06 15:51:03.679126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:15.000 [2024-12-06 15:51:03.679385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:15.000 [2024-12-06 15:51:03.679866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:15.000 [2024-12-06 15:51:03.679876] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69 00:24:15.000 [2024-12-06 15:51:03.679898] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:15.000 [2024-12-06 15:51:03.679908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:15.000 [2024-12-06 15:51:03.679919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:15.000 [2024-12-06 15:51:03.679929] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:15.000 [2024-12-06 15:51:03.680278] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:15.000 [2024-12-06 15:51:03.680331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:15.000 [2024-12-06 15:51:03.680368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:15.000 [2024-12-06 15:51:03.680400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:15.000 [2024-12-06 15:51:03.680634] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:15.001 [2024-12-06 15:51:03.680679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.001 [2024-12-06 15:51:03.680717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:15.001 [2024-12-06 15:51:03.680752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.031 ms 00:24:15.001 [2024-12-06 15:51:03.680915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.001 [2024-12-06 15:51:03.683202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.001 [2024-12-06 15:51:03.683332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:15.001 [2024-12-06 15:51:03.683443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:24:15.001 [2024-12-06 15:51:03.683570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.001 [2024-12-06 15:51:03.683788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.001 [2024-12-06 15:51:03.683856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:15.001 [2024-12-06 15:51:03.684026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:15.001 [2024-12-06 15:51:03.684080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.692686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.692854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.260 [2024-12-06 15:51:03.692981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.693008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.693073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.693093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.260 [2024-12-06 15:51:03.693104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.693117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.693218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.693259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.260 [2024-12-06 15:51:03.693271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.693286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.693309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.693324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.260 [2024-12-06 15:51:03.693335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.693347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.708198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.708259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.260 [2024-12-06 15:51:03.708275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 
[2024-12-06 15:51:03.708290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.720648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.720711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.260 [2024-12-06 15:51:03.720729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.720742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.720867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.720892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.260 [2024-12-06 15:51:03.720904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.720958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.721029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.721049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.260 [2024-12-06 15:51:03.721061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.721073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.721171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.721191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.260 [2024-12-06 15:51:03.721203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.721215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.721267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.721290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:15.260 [2024-12-06 15:51:03.721301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.721313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.721366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.721394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.260 [2024-12-06 15:51:03.721406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.260 [2024-12-06 15:51:03.721418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.260 [2024-12-06 15:51:03.721490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:15.260 [2024-12-06 15:51:03.721510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.260 [2024-12-06 15:51:03.721522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:15.261 [2024-12-06 15:51:03.721535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.261 [2024-12-06 15:51:03.721706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.861 ms, result 0 00:24:15.261 true 00:24:15.261 15:51:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89663 00:24:15.261 
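After the bdev_ftl_unload above, the bands validity dump shows every band back in the 'free' state with 0 of 261120 blocks valid and a zero write count, and the stats dump reports total writes: 960 against user writes: 0; WAF (write amplification factor, media writes divided by user writes) is therefore undefined with no user I/O and is printed as 'inf'. The shutdown persisted the L2P, NV cache metadata, valid map, P2L, band info, trim metadata and superblock before setting the FTL clean state, which is what the subsequent restore relies on. A hedged one-liner for summarizing such a dump, again assuming a capture with one record per line in ftl.log:

  # count bands per final state from the ftl_dev_dump_bands records
  awk '/ftl_dev_dump_bands/ && / state: / {n[$NF]++} END {for (s in n) print s, n[s]}' ftl.log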
15:51:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89663 ']' 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89663 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89663 00:24:15.261 killing process with pid 89663 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89663' 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 89663 00:24:15.261 15:51:03 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 89663 00:24:18.570 15:51:06 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:22.757 262144+0 records in 00:24:22.757 262144+0 records out 00:24:22.757 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.05905 s, 265 MB/s 00:24:22.757 15:51:10 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:24.130 15:51:12 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:24.130 [2024-12-06 15:51:12.771132] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:24:24.130 [2024-12-06 15:51:12.771316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89862 ] 00:24:24.387 [2024-12-06 15:51:12.934204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.387 [2024-12-06 15:51:12.980284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.647 [2024-12-06 15:51:13.125164] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.647 [2024-12-06 15:51:13.125251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:24.647 [2024-12-06 15:51:13.283238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.283280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:24.647 [2024-12-06 15:51:13.283299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:24.647 [2024-12-06 15:51:13.283310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.283366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.283383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.647 [2024-12-06 15:51:13.283394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:24.647 [2024-12-06 15:51:13.283421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.283459] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:24:24.647 [2024-12-06 15:51:13.283694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:24.647 [2024-12-06 15:51:13.283723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.283747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.647 [2024-12-06 15:51:13.283761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:24.647 [2024-12-06 15:51:13.283771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.285670] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:24.647 [2024-12-06 15:51:13.288405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.288444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:24.647 [2024-12-06 15:51:13.288458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:24:24.647 [2024-12-06 15:51:13.288476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.288627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.288646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:24.647 [2024-12-06 15:51:13.288661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:24.647 [2024-12-06 15:51:13.288680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.297150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.297183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.647 [2024-12-06 15:51:13.297214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.406 ms 00:24:24.647 [2024-12-06 15:51:13.297236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.297344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.297366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.647 [2024-12-06 15:51:13.297379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:24.647 [2024-12-06 15:51:13.297389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.297453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.297469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:24.647 [2024-12-06 15:51:13.297481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:24.647 [2024-12-06 15:51:13.297498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.297542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.647 [2024-12-06 15:51:13.299578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.299606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.647 [2024-12-06 15:51:13.299619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:24:24.647 [2024-12-06 15:51:13.299629] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.299668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.299683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:24.647 [2024-12-06 15:51:13.299694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:24.647 [2024-12-06 15:51:13.299709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.299750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:24.647 [2024-12-06 15:51:13.299780] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:24.647 [2024-12-06 15:51:13.299826] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:24.647 [2024-12-06 15:51:13.299848] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:24.647 [2024-12-06 15:51:13.299964] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:24.647 [2024-12-06 15:51:13.299981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:24.647 [2024-12-06 15:51:13.300001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:24.647 [2024-12-06 15:51:13.300015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:24.647 [2024-12-06 15:51:13.300029] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:24.647 [2024-12-06 15:51:13.300041] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:24.647 [2024-12-06 15:51:13.300052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:24.647 [2024-12-06 15:51:13.300061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:24.647 [2024-12-06 15:51:13.300071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:24.647 [2024-12-06 15:51:13.300093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.300105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:24.647 [2024-12-06 15:51:13.300116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:24:24.647 [2024-12-06 15:51:13.300130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.300210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.647 [2024-12-06 15:51:13.300224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:24.647 [2024-12-06 15:51:13.300235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:24.647 [2024-12-06 15:51:13.300244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.647 [2024-12-06 15:51:13.300350] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:24.647 [2024-12-06 15:51:13.300369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:24.647 [2024-12-06 15:51:13.300380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.647 
[2024-12-06 15:51:13.300391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.647 [2024-12-06 15:51:13.300412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:24.647 [2024-12-06 15:51:13.300422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:24.647 [2024-12-06 15:51:13.300432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:24.647 [2024-12-06 15:51:13.300443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:24.647 [2024-12-06 15:51:13.300454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:24.647 [2024-12-06 15:51:13.300464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.648 [2024-12-06 15:51:13.300474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:24.648 [2024-12-06 15:51:13.300489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:24.648 [2024-12-06 15:51:13.300513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:24.648 [2024-12-06 15:51:13.300525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:24.648 [2024-12-06 15:51:13.300536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:24.648 [2024-12-06 15:51:13.300545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:24.648 [2024-12-06 15:51:13.300564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:24.648 [2024-12-06 15:51:13.300592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:24.648 [2024-12-06 15:51:13.300621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:24.648 [2024-12-06 15:51:13.300650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:24.648 [2024-12-06 15:51:13.300687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:24.648 [2024-12-06 15:51:13.300715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.648 [2024-12-06 15:51:13.300734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:24:24.648 [2024-12-06 15:51:13.300744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:24.648 [2024-12-06 15:51:13.300754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:24.648 [2024-12-06 15:51:13.300764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:24.648 [2024-12-06 15:51:13.300773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:24.648 [2024-12-06 15:51:13.300783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:24.648 [2024-12-06 15:51:13.300802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:24.648 [2024-12-06 15:51:13.300813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300825] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:24.648 [2024-12-06 15:51:13.300839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:24.648 [2024-12-06 15:51:13.300849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:24.648 [2024-12-06 15:51:13.300869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:24.648 [2024-12-06 15:51:13.300879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:24.648 [2024-12-06 15:51:13.300889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:24.648 [2024-12-06 15:51:13.300899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:24.648 [2024-12-06 15:51:13.300908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:24.648 [2024-12-06 15:51:13.300918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:24.648 [2024-12-06 15:51:13.300929] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:24.648 [2024-12-06 15:51:13.300954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.300967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:24.648 [2024-12-06 15:51:13.300978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:24.648 [2024-12-06 15:51:13.300989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:24.648 [2024-12-06 15:51:13.300999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:24.648 [2024-12-06 15:51:13.301013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:24.648 [2024-12-06 15:51:13.301024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:24.648 [2024-12-06 15:51:13.301034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:24.648 [2024-12-06 15:51:13.301044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:24.648 [2024-12-06 15:51:13.301054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:24.648 [2024-12-06 15:51:13.301075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:24.648 [2024-12-06 15:51:13.301126] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:24.648 [2024-12-06 15:51:13.301137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:24.648 [2024-12-06 15:51:13.301159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:24.648 [2024-12-06 15:51:13.301169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:24.648 [2024-12-06 15:51:13.301179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:24.648 [2024-12-06 15:51:13.301193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.648 [2024-12-06 15:51:13.301204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:24.648 [2024-12-06 15:51:13.301215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:24:24.648 [2024-12-06 15:51:13.301224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.648 [2024-12-06 15:51:13.318876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.648 [2024-12-06 15:51:13.318926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.648 [2024-12-06 15:51:13.318953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.562 ms 00:24:24.648 [2024-12-06 15:51:13.318967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.648 [2024-12-06 15:51:13.319069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.648 [2024-12-06 15:51:13.319084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:24.648 [2024-12-06 15:51:13.319095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 
00:24:24.648 [2024-12-06 15:51:13.319107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.343043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.343084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.907 [2024-12-06 15:51:13.343102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.858 ms 00:24:24.907 [2024-12-06 15:51:13.343113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.343176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.343194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.907 [2024-12-06 15:51:13.343206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:24.907 [2024-12-06 15:51:13.343216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.343818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.343841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.907 [2024-12-06 15:51:13.343854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:24:24.907 [2024-12-06 15:51:13.343864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.344037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.344056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.907 [2024-12-06 15:51:13.344068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:24.907 [2024-12-06 15:51:13.344078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.352915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.352956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.907 [2024-12-06 15:51:13.352971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.811 ms 00:24:24.907 [2024-12-06 15:51:13.352982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.355888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:24.907 [2024-12-06 15:51:13.355922] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:24.907 [2024-12-06 15:51:13.355949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.355963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:24.907 [2024-12-06 15:51:13.355974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:24:24.907 [2024-12-06 15:51:13.355984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.373202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.373248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:24.907 [2024-12-06 15:51:13.373263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.171 ms 00:24:24.907 [2024-12-06 15:51:13.373274] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.375038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.375068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:24.907 [2024-12-06 15:51:13.375082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:24:24.907 [2024-12-06 15:51:13.375093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.376597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.376627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:24.907 [2024-12-06 15:51:13.376640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:24:24.907 [2024-12-06 15:51:13.376650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.376976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.377009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:24.907 [2024-12-06 15:51:13.377022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:24:24.907 [2024-12-06 15:51:13.377032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.401960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.402023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:24.907 [2024-12-06 15:51:13.402040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.898 ms 00:24:24.907 [2024-12-06 15:51:13.402052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.408467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:24.907 [2024-12-06 15:51:13.410509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.410536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:24.907 [2024-12-06 15:51:13.410560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.411 ms 00:24:24.907 [2024-12-06 15:51:13.410572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.410628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.410646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:24.907 [2024-12-06 15:51:13.410658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:24.907 [2024-12-06 15:51:13.410682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.410783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.410799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:24.907 [2024-12-06 15:51:13.410811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:24.907 [2024-12-06 15:51:13.410827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.410858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.410873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:24:24.907 [2024-12-06 15:51:13.410883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:24.907 [2024-12-06 15:51:13.410893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.410953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:24.907 [2024-12-06 15:51:13.410971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.410981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:24.907 [2024-12-06 15:51:13.410993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:24.907 [2024-12-06 15:51:13.411014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.414615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.414646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:24.907 [2024-12-06 15:51:13.414661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.572 ms 00:24:24.907 [2024-12-06 15:51:13.414672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.414746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.907 [2024-12-06 15:51:13.414763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:24.907 [2024-12-06 15:51:13.414774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:24.907 [2024-12-06 15:51:13.414791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.907 [2024-12-06 15:51:13.416093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.339 ms, result 0 00:24:25.841  [2024-12-06T15:51:15.470Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-06T15:51:16.847Z] Copying: 51/1024 [MB] (25 MBps) [2024-12-06T15:51:17.783Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-06T15:51:18.719Z] Copying: 103/1024 [MB] (26 MBps) [2024-12-06T15:51:19.655Z] Copying: 129/1024 [MB] (25 MBps) [2024-12-06T15:51:20.591Z] Copying: 153/1024 [MB] (24 MBps) [2024-12-06T15:51:21.528Z] Copying: 178/1024 [MB] (25 MBps) [2024-12-06T15:51:22.463Z] Copying: 203/1024 [MB] (24 MBps) [2024-12-06T15:51:23.852Z] Copying: 229/1024 [MB] (25 MBps) [2024-12-06T15:51:24.787Z] Copying: 254/1024 [MB] (24 MBps) [2024-12-06T15:51:25.721Z] Copying: 277/1024 [MB] (23 MBps) [2024-12-06T15:51:26.668Z] Copying: 300/1024 [MB] (22 MBps) [2024-12-06T15:51:27.605Z] Copying: 323/1024 [MB] (23 MBps) [2024-12-06T15:51:28.541Z] Copying: 345/1024 [MB] (21 MBps) [2024-12-06T15:51:29.477Z] Copying: 368/1024 [MB] (22 MBps) [2024-12-06T15:51:30.854Z] Copying: 390/1024 [MB] (22 MBps) [2024-12-06T15:51:31.788Z] Copying: 413/1024 [MB] (22 MBps) [2024-12-06T15:51:32.725Z] Copying: 435/1024 [MB] (22 MBps) [2024-12-06T15:51:33.772Z] Copying: 458/1024 [MB] (22 MBps) [2024-12-06T15:51:34.705Z] Copying: 481/1024 [MB] (22 MBps) [2024-12-06T15:51:35.653Z] Copying: 504/1024 [MB] (22 MBps) [2024-12-06T15:51:36.587Z] Copying: 526/1024 [MB] (22 MBps) [2024-12-06T15:51:37.522Z] Copying: 550/1024 [MB] (23 MBps) [2024-12-06T15:51:38.456Z] Copying: 573/1024 [MB] (23 MBps) [2024-12-06T15:51:39.825Z] Copying: 595/1024 [MB] (22 MBps) [2024-12-06T15:51:40.760Z] Copying: 618/1024 [MB] (22 MBps) [2024-12-06T15:51:41.698Z] Copying: 641/1024 [MB] (22 MBps) 
[2024-12-06T15:51:42.635Z] Copying: 664/1024 [MB] (22 MBps) [2024-12-06T15:51:43.573Z] Copying: 686/1024 [MB] (22 MBps) [2024-12-06T15:51:44.511Z] Copying: 709/1024 [MB] (22 MBps) [2024-12-06T15:51:45.447Z] Copying: 732/1024 [MB] (23 MBps) [2024-12-06T15:51:46.822Z] Copying: 756/1024 [MB] (23 MBps) [2024-12-06T15:51:47.755Z] Copying: 779/1024 [MB] (23 MBps) [2024-12-06T15:51:48.689Z] Copying: 803/1024 [MB] (23 MBps) [2024-12-06T15:51:49.627Z] Copying: 826/1024 [MB] (23 MBps) [2024-12-06T15:51:50.566Z] Copying: 850/1024 [MB] (23 MBps) [2024-12-06T15:51:51.504Z] Copying: 874/1024 [MB] (23 MBps) [2024-12-06T15:51:52.441Z] Copying: 897/1024 [MB] (23 MBps) [2024-12-06T15:51:53.818Z] Copying: 920/1024 [MB] (23 MBps) [2024-12-06T15:51:54.754Z] Copying: 943/1024 [MB] (23 MBps) [2024-12-06T15:51:55.689Z] Copying: 965/1024 [MB] (22 MBps) [2024-12-06T15:51:56.624Z] Copying: 988/1024 [MB] (22 MBps) [2024-12-06T15:51:57.194Z] Copying: 1012/1024 [MB] (23 MBps) [2024-12-06T15:51:57.194Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 15:51:56.935844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.935897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:08.501 [2024-12-06 15:51:56.935933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:08.501 [2024-12-06 15:51:56.935961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.936011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.501 [2024-12-06 15:51:56.937240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.937266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:08.501 [2024-12-06 15:51:56.937280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:25:08.501 [2024-12-06 15:51:56.937291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.939126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.939162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:08.501 [2024-12-06 15:51:56.939176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:25:08.501 [2024-12-06 15:51:56.939187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.954966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.955001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:08.501 [2024-12-06 15:51:56.955016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.753 ms 00:25:08.501 [2024-12-06 15:51:56.955027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.960099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.960127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:08.501 [2024-12-06 15:51:56.960140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:25:08.501 [2024-12-06 15:51:56.960150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.961521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.961555] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:08.501 [2024-12-06 15:51:56.961567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:25:08.501 [2024-12-06 15:51:56.961577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.965300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.965333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:08.501 [2024-12-06 15:51:56.965346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:25:08.501 [2024-12-06 15:51:56.965356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.965475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.965492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:08.501 [2024-12-06 15:51:56.965503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:08.501 [2024-12-06 15:51:56.965522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.967705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.967735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:08.501 [2024-12-06 15:51:56.967762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:25:08.501 [2024-12-06 15:51:56.967771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.969238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.969269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:08.501 [2024-12-06 15:51:56.969281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:25:08.501 [2024-12-06 15:51:56.969306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.970612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.970658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:08.501 [2024-12-06 15:51:56.970687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:25:08.501 [2024-12-06 15:51:56.970696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.971825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.501 [2024-12-06 15:51:56.971871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:08.501 [2024-12-06 15:51:56.971899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:25:08.501 [2024-12-06 15:51:56.971909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.501 [2024-12-06 15:51:56.971970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:08.501 [2024-12-06 15:51:56.971991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 
0 state: free 00:25:08.501 [2024-12-06 15:51:56.972025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 
261120 wr_cnt: 0 state: free 00:25:08.501 [2024-12-06 15:51:56.972299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972937] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.972993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:08.502 [2024-12-06 15:51:56.973250] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:08.502 [2024-12-06 15:51:56.973261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69 00:25:08.502 [2024-12-06 15:51:56.973272] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:08.502 [2024-12-06 15:51:56.973297] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:08.502 [2024-12-06 15:51:56.973307] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:08.502 [2024-12-06 15:51:56.973317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:08.502 [2024-12-06 15:51:56.973327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:08.502 [2024-12-06 15:51:56.973337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:08.502 [2024-12-06 15:51:56.973348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:08.502 [2024-12-06 15:51:56.973357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:08.502 [2024-12-06 15:51:56.973366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:08.502 [2024-12-06 15:51:56.973376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.502 [2024-12-06 15:51:56.973387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:08.502 [2024-12-06 15:51:56.973416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:25:08.502 [2024-12-06 15:51:56.973444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.502 [2024-12-06 15:51:56.976032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.502 [2024-12-06 15:51:56.976068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:08.502 [2024-12-06 15:51:56.976080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:25:08.502 [2024-12-06 15:51:56.976091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.502 [2024-12-06 15:51:56.976276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.502 [2024-12-06 15:51:56.976296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:08.502 [2024-12-06 15:51:56.976317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:25:08.502 [2024-12-06 15:51:56.976328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:56.986213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:56.986252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.503 [2024-12-06 15:51:56.986268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:56.986281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:56.986367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:56.986383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.503 [2024-12-06 15:51:56.986409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:56.986420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:56.986516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:56.986535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.503 [2024-12-06 15:51:56.986547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:25:08.503 [2024-12-06 15:51:56.986558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:56.986580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:56.986595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.503 [2024-12-06 15:51:56.986612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:56.986623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.002262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.002312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.503 [2024-12-06 15:51:57.002328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.002339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.503 [2024-12-06 15:51:57.013220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.503 [2024-12-06 15:51:57.013383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.503 [2024-12-06 15:51:57.013483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.503 [2024-12-06 15:51:57.013624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.503 [2024-12-06 15:51:57.013722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.503 [2024-12-06 
15:51:57.013815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.013888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.503 [2024-12-06 15:51:57.013910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.503 [2024-12-06 15:51:57.013923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.503 [2024-12-06 15:51:57.013952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.503 [2024-12-06 15:51:57.014121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.230 ms, result 0 00:25:08.762 00:25:08.762 00:25:08.762 15:51:57 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:08.762 [2024-12-06 15:51:57.415811] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:25:08.762 [2024-12-06 15:51:57.416028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90314 ] 00:25:09.021 [2024-12-06 15:51:57.571317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.021 [2024-12-06 15:51:57.610212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.281 [2024-12-06 15:51:57.755812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.281 [2024-12-06 15:51:57.755912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.281 [2024-12-06 15:51:57.913049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.913094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.281 [2024-12-06 15:51:57.913129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.281 [2024-12-06 15:51:57.913139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.913197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.913214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.281 [2024-12-06 15:51:57.913225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:09.281 [2024-12-06 15:51:57.913246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.913286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.281 [2024-12-06 15:51:57.913617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.281 [2024-12-06 15:51:57.913648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.913671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.281 [2024-12-06 15:51:57.913686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:25:09.281 [2024-12-06 
15:51:57.913703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.916002] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:09.281 [2024-12-06 15:51:57.919479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.919516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.281 [2024-12-06 15:51:57.919530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:25:09.281 [2024-12-06 15:51:57.919556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.919617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.919643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:09.281 [2024-12-06 15:51:57.919655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:09.281 [2024-12-06 15:51:57.919665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.930790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.930827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.281 [2024-12-06 15:51:57.930847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.080 ms 00:25:09.281 [2024-12-06 15:51:57.930857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.930974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.930993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.281 [2024-12-06 15:51:57.931004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:09.281 [2024-12-06 15:51:57.931022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.931098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.281 [2024-12-06 15:51:57.931131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.281 [2024-12-06 15:51:57.931142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.281 [2024-12-06 15:51:57.931174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.281 [2024-12-06 15:51:57.931206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.281 [2024-12-06 15:51:57.933799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.282 [2024-12-06 15:51:57.933831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.282 [2024-12-06 15:51:57.933860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:25:09.282 [2024-12-06 15:51:57.933870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.282 [2024-12-06 15:51:57.933914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.282 [2024-12-06 15:51:57.933937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.282 [2024-12-06 15:51:57.933948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:09.282 [2024-12-06 15:51:57.933977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.282 [2024-12-06 15:51:57.934015] 
ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.282 [2024-12-06 15:51:57.934050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.282 [2024-12-06 15:51:57.934093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.282 [2024-12-06 15:51:57.934113] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:09.282 [2024-12-06 15:51:57.934226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.282 [2024-12-06 15:51:57.934240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.282 [2024-12-06 15:51:57.934258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:09.282 [2024-12-06 15:51:57.934272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934284] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934295] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.282 [2024-12-06 15:51:57.934305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.282 [2024-12-06 15:51:57.934324] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.282 [2024-12-06 15:51:57.934333] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.282 [2024-12-06 15:51:57.934344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.282 [2024-12-06 15:51:57.934355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.282 [2024-12-06 15:51:57.934365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:25:09.282 [2024-12-06 15:51:57.934379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.282 [2024-12-06 15:51:57.934466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.282 [2024-12-06 15:51:57.934488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.282 [2024-12-06 15:51:57.934500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:09.282 [2024-12-06 15:51:57.934525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.282 [2024-12-06 15:51:57.934646] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.282 [2024-12-06 15:51:57.934664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.282 [2024-12-06 15:51:57.934675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.282 [2024-12-06 15:51:57.934706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:25:09.282 [2024-12-06 15:51:57.934736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.282 [2024-12-06 15:51:57.934755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.282 [2024-12-06 15:51:57.934769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.282 [2024-12-06 15:51:57.934780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.282 [2024-12-06 15:51:57.934789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.282 [2024-12-06 15:51:57.934802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.282 [2024-12-06 15:51:57.934812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.282 [2024-12-06 15:51:57.934832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.282 [2024-12-06 15:51:57.934863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.282 [2024-12-06 15:51:57.934892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.282 [2024-12-06 15:51:57.934920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.282 [2024-12-06 15:51:57.934955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.282 [2024-12-06 15:51:57.934978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.282 [2024-12-06 15:51:57.934991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.282 [2024-12-06 15:51:57.935001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.282 [2024-12-06 15:51:57.935011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.282 [2024-12-06 15:51:57.935020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.282 [2024-12-06 15:51:57.935044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.282 [2024-12-06 15:51:57.935054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.282 [2024-12-06 15:51:57.935063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.282 [2024-12-06 15:51:57.935073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.282 [2024-12-06 15:51:57.935082] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.935091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.282 [2024-12-06 15:51:57.935101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.282 [2024-12-06 15:51:57.935111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.935124] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.282 [2024-12-06 15:51:57.935137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.282 [2024-12-06 15:51:57.935147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.282 [2024-12-06 15:51:57.935167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.282 [2024-12-06 15:51:57.935178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.282 [2024-12-06 15:51:57.935188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.282 [2024-12-06 15:51:57.935197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.282 [2024-12-06 15:51:57.935221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.282 [2024-12-06 15:51:57.935231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.282 [2024-12-06 15:51:57.935240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.282 [2024-12-06 15:51:57.935252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.282 [2024-12-06 15:51:57.935264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.282 [2024-12-06 15:51:57.935276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.282 [2024-12-06 15:51:57.935286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.282 [2024-12-06 15:51:57.935312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.282 [2024-12-06 15:51:57.935322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.282 [2024-12-06 15:51:57.935336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.282 [2024-12-06 15:51:57.935347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.282 [2024-12-06 15:51:57.935358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.282 [2024-12-06 15:51:57.935368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.282 [2024-12-06 15:51:57.935379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.282 [2024-12-06 15:51:57.935402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.282 [2024-12-06 15:51:57.935413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.282 [2024-12-06 15:51:57.935424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.282 [2024-12-06 15:51:57.935434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.282 [2024-12-06 15:51:57.935445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.282 [2024-12-06 15:51:57.935455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.282 [2024-12-06 15:51:57.935476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.283 [2024-12-06 15:51:57.935488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.283 [2024-12-06 15:51:57.935499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.283 [2024-12-06 15:51:57.935509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.283 [2024-12-06 15:51:57.935520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.283 [2024-12-06 15:51:57.935535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.283 [2024-12-06 15:51:57.935552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.283 [2024-12-06 15:51:57.935563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:25:09.283 [2024-12-06 15:51:57.935574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.283 [2024-12-06 15:51:57.953247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.283 [2024-12-06 15:51:57.953305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.283 [2024-12-06 15:51:57.953322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.594 ms 00:25:09.283 [2024-12-06 15:51:57.953332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.283 [2024-12-06 15:51:57.953439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.283 [2024-12-06 15:51:57.953454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.283 [2024-12-06 15:51:57.953464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:09.283 [2024-12-06 15:51:57.953474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.975956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.976029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.543 [2024-12-06 15:51:57.976061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:25:09.543 [2024-12-06 15:51:57.976072] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.976145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.976161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.543 [2024-12-06 15:51:57.976173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.543 [2024-12-06 15:51:57.976183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.977224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.977270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.543 [2024-12-06 15:51:57.977299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:25:09.543 [2024-12-06 15:51:57.977309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.977494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.977512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.543 [2024-12-06 15:51:57.977539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:25:09.543 [2024-12-06 15:51:57.977550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.988100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.988135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.543 [2024-12-06 15:51:57.988161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.508 ms 00:25:09.543 [2024-12-06 15:51:57.988172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:57.991458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:09.543 [2024-12-06 15:51:57.991495] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.543 [2024-12-06 15:51:57.991516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:57.991527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.543 [2024-12-06 15:51:57.991537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:25:09.543 [2024-12-06 15:51:57.991547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.004576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.004612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.543 [2024-12-06 15:51:58.004628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.981 ms 00:25:09.543 [2024-12-06 15:51:58.004638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.006377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.006410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.543 [2024-12-06 15:51:58.006424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:25:09.543 [2024-12-06 15:51:58.006433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 
15:51:58.007891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.007923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.543 [2024-12-06 15:51:58.007950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:25:09.543 [2024-12-06 15:51:58.007963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.008293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.008317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.543 [2024-12-06 15:51:58.008340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:25:09.543 [2024-12-06 15:51:58.008361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.032676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.032754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.543 [2024-12-06 15:51:58.032789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:25:09.543 [2024-12-06 15:51:58.032800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.039664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.543 [2024-12-06 15:51:58.041699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.041729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.543 [2024-12-06 15:51:58.041742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.836 ms 00:25:09.543 [2024-12-06 15:51:58.041757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.041853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.041871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.543 [2024-12-06 15:51:58.041895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.543 [2024-12-06 15:51:58.041905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.042035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.042053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.543 [2024-12-06 15:51:58.042071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:09.543 [2024-12-06 15:51:58.042082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.543 [2024-12-06 15:51:58.042113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.543 [2024-12-06 15:51:58.042126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.543 [2024-12-06 15:51:58.042137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.543 [2024-12-06 15:51:58.042147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.544 [2024-12-06 15:51:58.042191] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.544 [2024-12-06 15:51:58.042211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.544 [2024-12-06 15:51:58.042222] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.544 [2024-12-06 15:51:58.042237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:09.544 [2024-12-06 15:51:58.042247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.544 [2024-12-06 15:51:58.046281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.544 [2024-12-06 15:51:58.046315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.544 [2024-12-06 15:51:58.046329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:25:09.544 [2024-12-06 15:51:58.046341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.544 [2024-12-06 15:51:58.046411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.544 [2024-12-06 15:51:58.046427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.544 [2024-12-06 15:51:58.046450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:09.544 [2024-12-06 15:51:58.046472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.544 [2024-12-06 15:51:58.048176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.564 ms, result 0 00:25:10.916  [2024-12-06T15:52:00.542Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-06T15:52:01.478Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-06T15:52:02.495Z] Copying: 68/1024 [MB] (23 MBps) [2024-12-06T15:52:03.429Z] Copying: 91/1024 [MB] (23 MBps) [2024-12-06T15:52:04.362Z] Copying: 115/1024 [MB] (23 MBps) [2024-12-06T15:52:05.299Z] Copying: 138/1024 [MB] (23 MBps) [2024-12-06T15:52:06.235Z] Copying: 161/1024 [MB] (22 MBps) [2024-12-06T15:52:07.613Z] Copying: 184/1024 [MB] (23 MBps) [2024-12-06T15:52:08.552Z] Copying: 207/1024 [MB] (23 MBps) [2024-12-06T15:52:09.489Z] Copying: 230/1024 [MB] (23 MBps) [2024-12-06T15:52:10.427Z] Copying: 253/1024 [MB] (23 MBps) [2024-12-06T15:52:11.365Z] Copying: 276/1024 [MB] (23 MBps) [2024-12-06T15:52:12.302Z] Copying: 299/1024 [MB] (23 MBps) [2024-12-06T15:52:13.238Z] Copying: 322/1024 [MB] (23 MBps) [2024-12-06T15:52:14.614Z] Copying: 345/1024 [MB] (23 MBps) [2024-12-06T15:52:15.552Z] Copying: 368/1024 [MB] (23 MBps) [2024-12-06T15:52:16.488Z] Copying: 392/1024 [MB] (23 MBps) [2024-12-06T15:52:17.426Z] Copying: 416/1024 [MB] (23 MBps) [2024-12-06T15:52:18.362Z] Copying: 439/1024 [MB] (23 MBps) [2024-12-06T15:52:19.301Z] Copying: 463/1024 [MB] (23 MBps) [2024-12-06T15:52:20.237Z] Copying: 485/1024 [MB] (22 MBps) [2024-12-06T15:52:21.610Z] Copying: 508/1024 [MB] (23 MBps) [2024-12-06T15:52:22.545Z] Copying: 534/1024 [MB] (25 MBps) [2024-12-06T15:52:23.483Z] Copying: 558/1024 [MB] (24 MBps) [2024-12-06T15:52:24.420Z] Copying: 581/1024 [MB] (22 MBps) [2024-12-06T15:52:25.357Z] Copying: 603/1024 [MB] (22 MBps) [2024-12-06T15:52:26.291Z] Copying: 626/1024 [MB] (22 MBps) [2024-12-06T15:52:27.223Z] Copying: 649/1024 [MB] (22 MBps) [2024-12-06T15:52:28.597Z] Copying: 671/1024 [MB] (22 MBps) [2024-12-06T15:52:29.532Z] Copying: 694/1024 [MB] (22 MBps) [2024-12-06T15:52:30.519Z] Copying: 716/1024 [MB] (22 MBps) [2024-12-06T15:52:31.478Z] Copying: 739/1024 [MB] (22 MBps) [2024-12-06T15:52:32.413Z] Copying: 762/1024 [MB] (22 MBps) [2024-12-06T15:52:33.349Z] Copying: 785/1024 [MB] (23 MBps) [2024-12-06T15:52:34.282Z] Copying: 807/1024 [MB] (22 MBps) [2024-12-06T15:52:35.656Z] Copying: 831/1024 [MB] (23 
MBps) [2024-12-06T15:52:36.222Z] Copying: 854/1024 [MB] (23 MBps) [2024-12-06T15:52:37.596Z] Copying: 878/1024 [MB] (23 MBps) [2024-12-06T15:52:38.533Z] Copying: 902/1024 [MB] (23 MBps) [2024-12-06T15:52:39.469Z] Copying: 925/1024 [MB] (23 MBps) [2024-12-06T15:52:40.407Z] Copying: 948/1024 [MB] (23 MBps) [2024-12-06T15:52:41.344Z] Copying: 971/1024 [MB] (23 MBps) [2024-12-06T15:52:42.282Z] Copying: 995/1024 [MB] (23 MBps) [2024-12-06T15:52:42.541Z] Copying: 1018/1024 [MB] (23 MBps) [2024-12-06T15:52:42.802Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 15:52:42.742530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.742657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.109 [2024-12-06 15:52:42.742692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:54.109 [2024-12-06 15:52:42.742741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.742790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.109 [2024-12-06 15:52:42.744310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.744347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.109 [2024-12-06 15:52:42.744367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:25:54.109 [2024-12-06 15:52:42.744385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.744819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.744882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.109 [2024-12-06 15:52:42.744906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:25:54.109 [2024-12-06 15:52:42.744933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.750267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.750312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.109 [2024-12-06 15:52:42.750335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.277 ms 00:25:54.109 [2024-12-06 15:52:42.750353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.757202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.757234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.109 [2024-12-06 15:52:42.757277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.796 ms 00:25:54.109 [2024-12-06 15:52:42.757288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.759095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.759140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.109 [2024-12-06 15:52:42.759156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:25:54.109 [2024-12-06 15:52:42.759183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.764091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.764144] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.109 [2024-12-06 15:52:42.764176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:25:54.109 [2024-12-06 15:52:42.764187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.764289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.764306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.109 [2024-12-06 15:52:42.764319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:54.109 [2024-12-06 15:52:42.764330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.766982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.767217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:54.109 [2024-12-06 15:52:42.767245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:25:54.109 [2024-12-06 15:52:42.767268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.769051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.769090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:54.109 [2024-12-06 15:52:42.769122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:25:54.109 [2024-12-06 15:52:42.769132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.770629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.770835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.109 [2024-12-06 15:52:42.770862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:25:54.109 [2024-12-06 15:52:42.770875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.772297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.109 [2024-12-06 15:52:42.772366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:54.109 [2024-12-06 15:52:42.772397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:25:54.109 [2024-12-06 15:52:42.772407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.109 [2024-12-06 15:52:42.772442] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:54.109 [2024-12-06 15:52:42.772466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 
15:52:42.772534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:25:54.109 [2024-12-06 15:52:42.772874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.772986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:54.109 [2024-12-06 15:52:42.773127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:54.110 [2024-12-06 15:52:42.773744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:54.110 [2024-12-06 15:52:42.773755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69 00:25:54.110 [2024-12-06 15:52:42.773767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:54.110 [2024-12-06 15:52:42.773777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:54.110 [2024-12-06 15:52:42.773788] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:54.110 [2024-12-06 15:52:42.773810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:54.110 [2024-12-06 15:52:42.773821] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:54.110 [2024-12-06 15:52:42.773839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:54.110 [2024-12-06 15:52:42.773850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:54.110 [2024-12-06 15:52:42.773860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:54.110 [2024-12-06 15:52:42.773869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:54.110 [2024-12-06 15:52:42.773880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.110 [2024-12-06 15:52:42.773899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:54.110 [2024-12-06 15:52:42.773911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:25:54.110 [2024-12-06 15:52:42.773922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.777530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.110 [2024-12-06 15:52:42.777717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:54.110 [2024-12-06 15:52:42.777861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:25:54.110 [2024-12-06 15:52:42.777915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.778248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.110 [2024-12-06 15:52:42.778299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:54.110 [2024-12-06 15:52:42.778484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:25:54.110 [2024-12-06 15:52:42.778538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.789128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.110 [2024-12-06 15:52:42.789345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.110 [2024-12-06 15:52:42.789482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.110 [2024-12-06 15:52:42.789543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.789745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.110 [2024-12-06 15:52:42.789810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.110 [2024-12-06 15:52:42.789855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.110 [2024-12-06 15:52:42.789892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.790022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.110 [2024-12-06 15:52:42.790091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.110 [2024-12-06 15:52:42.790136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.110 [2024-12-06 15:52:42.790182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.110 [2024-12-06 15:52:42.790298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.110 [2024-12-06 15:52:42.790357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.110 [2024-12-06 15:52:42.790376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:54.110 [2024-12-06 15:52:42.790388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.807020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.807293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.370 [2024-12-06 15:52:42.807324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.807337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.370 [2024-12-06 15:52:42.819315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.370 [2024-12-06 15:52:42.819424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.370 [2024-12-06 15:52:42.819524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.370 [2024-12-06 15:52:42.819648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:54.370 [2024-12-06 15:52:42.819734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.370 [2024-12-06 15:52:42.819831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.819894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.370 [2024-12-06 15:52:42.819909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.370 [2024-12-06 15:52:42.819926] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.370 [2024-12-06 15:52:42.819967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.370 [2024-12-06 15:52:42.820145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 77.573 ms, result 0 00:25:54.629 00:25:54.629 00:25:54.629 15:52:43 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.534 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:56.534 15:52:44 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:56.534 [2024-12-06 15:52:45.019493] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:25:56.534 [2024-12-06 15:52:45.019697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90789 ] 00:25:56.534 [2024-12-06 15:52:45.184816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.792 [2024-12-06 15:52:45.235785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.792 [2024-12-06 15:52:45.395874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.792 [2024-12-06 15:52:45.395992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.052 [2024-12-06 15:52:45.553313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.052 [2024-12-06 15:52:45.553361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.052 [2024-12-06 15:52:45.553390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.052 [2024-12-06 15:52:45.553400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.052 [2024-12-06 15:52:45.553463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.052 [2024-12-06 15:52:45.553486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.052 [2024-12-06 15:52:45.553506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:57.052 [2024-12-06 15:52:45.553525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.052 [2024-12-06 15:52:45.553561] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.052 [2024-12-06 15:52:45.553793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.052 [2024-12-06 15:52:45.553814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.052 [2024-12-06 15:52:45.553829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.052 [2024-12-06 15:52:45.553844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:57.052 [2024-12-06 15:52:45.553854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.052 [2024-12-06 15:52:45.556256] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.052 [2024-12-06 15:52:45.559757] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.052 [2024-12-06 15:52:45.559796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.052 [2024-12-06 15:52:45.559839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.502 ms 00:25:57.052 [2024-12-06 15:52:45.559857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.559937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.559970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:57.053 [2024-12-06 15:52:45.559988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:57.053 [2024-12-06 15:52:45.560007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.571112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.571158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.053 [2024-12-06 15:52:45.571182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.037 ms 00:25:57.053 [2024-12-06 15:52:45.571193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.571305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.571323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.053 [2024-12-06 15:52:45.571335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:57.053 [2024-12-06 15:52:45.571345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.571428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.571445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.053 [2024-12-06 15:52:45.571456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:57.053 [2024-12-06 15:52:45.571472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.571519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.053 [2024-12-06 15:52:45.574245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.574279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.053 [2024-12-06 15:52:45.574293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.741 ms 00:25:57.053 [2024-12-06 15:52:45.574302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.574340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.574355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.053 [2024-12-06 15:52:45.574369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:57.053 [2024-12-06 15:52:45.574379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.574408] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.053 [2024-12-06 15:52:45.574435] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.053 [2024-12-06 
15:52:45.574477] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.053 [2024-12-06 15:52:45.574497] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.053 [2024-12-06 15:52:45.574588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.053 [2024-12-06 15:52:45.574607] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.053 [2024-12-06 15:52:45.574620] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.053 [2024-12-06 15:52:45.574633] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.053 [2024-12-06 15:52:45.574645] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.053 [2024-12-06 15:52:45.574664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.053 [2024-12-06 15:52:45.574674] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.053 [2024-12-06 15:52:45.574683] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.053 [2024-12-06 15:52:45.574692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.053 [2024-12-06 15:52:45.574702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.574712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.053 [2024-12-06 15:52:45.574721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:25:57.053 [2024-12-06 15:52:45.574737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.574811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.574824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.053 [2024-12-06 15:52:45.574834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:57.053 [2024-12-06 15:52:45.574843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.574957] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.053 [2024-12-06 15:52:45.574976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.053 [2024-12-06 15:52:45.574987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.053 [2024-12-06 15:52:45.574997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.053 [2024-12-06 15:52:45.575021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.053 [2024-12-06 15:52:45.575049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.053 [2024-12-06 
15:52:45.575067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.053 [2024-12-06 15:52:45.575079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.053 [2024-12-06 15:52:45.575088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.053 [2024-12-06 15:52:45.575098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.053 [2024-12-06 15:52:45.575108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.053 [2024-12-06 15:52:45.575117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.053 [2024-12-06 15:52:45.575135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.053 [2024-12-06 15:52:45.575165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.053 [2024-12-06 15:52:45.575191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.053 [2024-12-06 15:52:45.575217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.053 [2024-12-06 15:52:45.575253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.053 [2024-12-06 15:52:45.575279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.053 [2024-12-06 15:52:45.575297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.053 [2024-12-06 15:52:45.575306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.053 [2024-12-06 15:52:45.575314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.053 [2024-12-06 15:52:45.575323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.053 [2024-12-06 15:52:45.575332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.053 [2024-12-06 15:52:45.575341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.053 [2024-12-06 15:52:45.575359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:25:57.053 [2024-12-06 15:52:45.575368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575380] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.053 [2024-12-06 15:52:45.575398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.053 [2024-12-06 15:52:45.575410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.053 [2024-12-06 15:52:45.575430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.053 [2024-12-06 15:52:45.575456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.053 [2024-12-06 15:52:45.575465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.053 [2024-12-06 15:52:45.575475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.053 [2024-12-06 15:52:45.575483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.053 [2024-12-06 15:52:45.575493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.053 [2024-12-06 15:52:45.575504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.053 [2024-12-06 15:52:45.575516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.053 [2024-12-06 15:52:45.575547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.053 [2024-12-06 15:52:45.575557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.053 [2024-12-06 15:52:45.575567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.053 [2024-12-06 15:52:45.575581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.053 [2024-12-06 15:52:45.575592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.053 [2024-12-06 15:52:45.575602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.053 [2024-12-06 15:52:45.575615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.053 [2024-12-06 15:52:45.575625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.053 [2024-12-06 15:52:45.575645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575666] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.053 [2024-12-06 15:52:45.575696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.053 [2024-12-06 15:52:45.575708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.053 [2024-12-06 15:52:45.575729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.053 [2024-12-06 15:52:45.575739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.053 [2024-12-06 15:52:45.575750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.053 [2024-12-06 15:52:45.575764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.575776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.053 [2024-12-06 15:52:45.575788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:25:57.053 [2024-12-06 15:52:45.575801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.593235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.593291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.053 [2024-12-06 15:52:45.593308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.350 ms 00:25:57.053 [2024-12-06 15:52:45.593319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.593413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.593427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.053 [2024-12-06 15:52:45.593439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:57.053 [2024-12-06 15:52:45.593449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.627336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.053 [2024-12-06 15:52:45.627380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.053 [2024-12-06 15:52:45.627407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.803 ms 00:25:57.053 [2024-12-06 15:52:45.627418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.053 [2024-12-06 15:52:45.627470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.627485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.054 
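
The "SB metadata layout - nvc" dump above lists each metadata region as Region type/ver/blk_offs/blk_sz, and the offsets tile the write-buffer device with no gaps: every region starts exactly where the previous one ends, down to the trailing free region (type 0xfffffffe) at blk_offs 0x7220. The end offset, 0x7220 + 0x13c0e0 = 0x143300 blocks, works out to exactly the 5171.00 MiB NV cache capacity reported earlier, which implies a 4 KiB FTL block. A minimal standalone C sketch (not SPDK code; the table is transcribed from the dump) that re-checks the invariant:

    /* Verify that the superblock v5 nvc region dump tiles the device
     * contiguously. Offsets/sizes copied from the log above. */
    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct region { uint32_t type; uint64_t blk_offs, blk_sz; };

    int main(void)
    {
        const struct region nvc[] = {   /* in dump order */
            {0x0, 0x0, 0x20},     {0x2, 0x20, 0x5000},
            {0x3, 0x5020, 0x80},  {0x4, 0x50a0, 0x80},
            {0xa, 0x5120, 0x800}, {0xb, 0x5920, 0x800},
            {0xc, 0x6120, 0x800}, {0xd, 0x6920, 0x800},
            {0xe, 0x7120, 0x40},  {0xf, 0x7160, 0x40},
            {0x10, 0x71a0, 0x20}, {0x11, 0x71c0, 0x20},
            {0x6, 0x71e0, 0x20},  {0x7, 0x7200, 0x20},
            {0xfffffffe, 0x7220, 0x13c0e0},  /* free space */
        };
        uint64_t next = 0;
        for (size_t i = 0; i < sizeof(nvc) / sizeof(nvc[0]); i++) {
            assert(nvc[i].blk_offs == next);  /* no gap, no overlap */
            next = nvc[i].blk_offs + nvc[i].blk_sz;
        }
        /* 0x143300 blocks * 4 KiB = 5171.00 MiB, the dumped capacity. */
        printf("contiguous, end offset 0x%" PRIx64 " blocks\n", next);
        return 0;
    }
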
[2024-12-06 15:52:45.627497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.054 [2024-12-06 15:52:45.627521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.628378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.628406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.054 [2024-12-06 15:52:45.628420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:25:57.054 [2024-12-06 15:52:45.628431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.628624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.628643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.054 [2024-12-06 15:52:45.628655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:25:57.054 [2024-12-06 15:52:45.628665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.639441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.639625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.054 [2024-12-06 15:52:45.639652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.744 ms 00:25:57.054 [2024-12-06 15:52:45.639677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.643186] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:57.054 [2024-12-06 15:52:45.643225] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.054 [2024-12-06 15:52:45.643263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.643274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.054 [2024-12-06 15:52:45.643285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:25:57.054 [2024-12-06 15:52:45.643295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.656408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.656446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.054 [2024-12-06 15:52:45.656461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.067 ms 00:25:57.054 [2024-12-06 15:52:45.656472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.658456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.658492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.054 [2024-12-06 15:52:45.658506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:25:57.054 [2024-12-06 15:52:45.658515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.660124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.660158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.054 [2024-12-06 15:52:45.660173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
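
The NV cache restore above reports full chunks = 2 and empty chunks = 2 against the chunk count of 5 from the layout dump; the one chunk in neither bucket is presumably the open chunk that was being filled when the device last went down (an inference from the counts, not something the log states):

\[ 2\ \text{full} + 2\ \text{empty} + 1\ \text{open (inferred)} = 5\ \text{chunks} \]
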
[FTL][ftl0] duration: 1.573 ms 00:25:57.054 [2024-12-06 15:52:45.660182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.660489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.660508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.054 [2024-12-06 15:52:45.660524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:25:57.054 [2024-12-06 15:52:45.660538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.684743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.685101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.054 [2024-12-06 15:52:45.685131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.143 ms 00:25:57.054 [2024-12-06 15:52:45.685146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.691836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.054 [2024-12-06 15:52:45.694036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.694069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.054 [2024-12-06 15:52:45.694096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.835 ms 00:25:57.054 [2024-12-06 15:52:45.694109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.694201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.694219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.054 [2024-12-06 15:52:45.694244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:57.054 [2024-12-06 15:52:45.694254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.694339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.694360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.054 [2024-12-06 15:52:45.694372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:57.054 [2024-12-06 15:52:45.694382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.694408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.694420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.054 [2024-12-06 15:52:45.694430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:57.054 [2024-12-06 15:52:45.694440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.694484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.054 [2024-12-06 15:52:45.694512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.694523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.054 [2024-12-06 15:52:45.694537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:57.054 [2024-12-06 15:52:45.694546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
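
The L2P figures scattered through the startup are mutually consistent: the layout dump reserves 80.00 MiB for Region l2p, which is exactly 20971520 entries at the reported 4-byte address size, and at the 4 KiB block implied by the region math earlier those entries cover an 80 GiB logical space. The ftl_l2p_cache notice above then shows that at most 9 MiB of that table is kept resident at a time:

\[ 20971520 \times 4\,\mathrm{B} = 83886080\,\mathrm{B} = 80.00\,\mathrm{MiB}, \qquad 20971520 \times 4\,\mathrm{KiB} = 80\,\mathrm{GiB} \]
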
[FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.698676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.698715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.054 [2024-12-06 15:52:45.698741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:25:57.054 [2024-12-06 15:52:45.698752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.698821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.054 [2024-12-06 15:52:45.698847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.054 [2024-12-06 15:52:45.698863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:57.054 [2024-12-06 15:52:45.698875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.054 [2024-12-06 15:52:45.700629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.734 ms, result 0 00:25:58.427  [2024-12-06T15:52:48.055Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-06T15:52:48.987Z] Copying: 45/1024 [MB] (23 MBps) [2024-12-06T15:52:49.920Z] Copying: 69/1024 [MB] (23 MBps) [2024-12-06T15:52:50.854Z] Copying: 92/1024 [MB] (23 MBps) [2024-12-06T15:52:51.789Z] Copying: 115/1024 [MB] (23 MBps) [2024-12-06T15:52:52.727Z] Copying: 139/1024 [MB] (23 MBps) [2024-12-06T15:52:54.105Z] Copying: 162/1024 [MB] (23 MBps) [2024-12-06T15:52:55.040Z] Copying: 185/1024 [MB] (23 MBps) [2024-12-06T15:52:55.975Z] Copying: 209/1024 [MB] (23 MBps) [2024-12-06T15:52:56.911Z] Copying: 232/1024 [MB] (23 MBps) [2024-12-06T15:52:57.848Z] Copying: 255/1024 [MB] (23 MBps) [2024-12-06T15:52:58.786Z] Copying: 278/1024 [MB] (23 MBps) [2024-12-06T15:52:59.753Z] Copying: 301/1024 [MB] (23 MBps) [2024-12-06T15:53:01.127Z] Copying: 325/1024 [MB] (23 MBps) [2024-12-06T15:53:02.063Z] Copying: 348/1024 [MB] (23 MBps) [2024-12-06T15:53:02.997Z] Copying: 371/1024 [MB] (23 MBps) [2024-12-06T15:53:03.932Z] Copying: 395/1024 [MB] (23 MBps) [2024-12-06T15:53:04.869Z] Copying: 419/1024 [MB] (24 MBps) [2024-12-06T15:53:05.803Z] Copying: 442/1024 [MB] (23 MBps) [2024-12-06T15:53:06.743Z] Copying: 467/1024 [MB] (24 MBps) [2024-12-06T15:53:08.120Z] Copying: 492/1024 [MB] (24 MBps) [2024-12-06T15:53:09.057Z] Copying: 517/1024 [MB] (24 MBps) [2024-12-06T15:53:09.994Z] Copying: 540/1024 [MB] (23 MBps) [2024-12-06T15:53:10.927Z] Copying: 563/1024 [MB] (22 MBps) [2024-12-06T15:53:11.863Z] Copying: 586/1024 [MB] (22 MBps) [2024-12-06T15:53:12.798Z] Copying: 609/1024 [MB] (22 MBps) [2024-12-06T15:53:13.732Z] Copying: 631/1024 [MB] (22 MBps) [2024-12-06T15:53:15.108Z] Copying: 654/1024 [MB] (22 MBps) [2024-12-06T15:53:16.042Z] Copying: 677/1024 [MB] (22 MBps) [2024-12-06T15:53:16.978Z] Copying: 700/1024 [MB] (22 MBps) [2024-12-06T15:53:17.914Z] Copying: 722/1024 [MB] (22 MBps) [2024-12-06T15:53:18.849Z] Copying: 743/1024 [MB] (20 MBps) [2024-12-06T15:53:19.786Z] Copying: 766/1024 [MB] (22 MBps) [2024-12-06T15:53:20.722Z] Copying: 788/1024 [MB] (22 MBps) [2024-12-06T15:53:22.098Z] Copying: 810/1024 [MB] (22 MBps) [2024-12-06T15:53:23.035Z] Copying: 833/1024 [MB] (22 MBps) [2024-12-06T15:53:23.971Z] Copying: 855/1024 [MB] (22 MBps) [2024-12-06T15:53:24.907Z] Copying: 878/1024 [MB] (22 MBps) [2024-12-06T15:53:25.844Z] Copying: 900/1024 [MB] (22 MBps) [2024-12-06T15:53:26.778Z] Copying: 923/1024 [MB] (22 MBps) [2024-12-06T15:53:27.725Z] Copying: 
946/1024 [MB] (22 MBps) [2024-12-06T15:53:29.118Z] Copying: 968/1024 [MB] (21 MBps) [2024-12-06T15:53:30.055Z] Copying: 990/1024 [MB] (21 MBps) [2024-12-06T15:53:30.993Z] Copying: 1012/1024 [MB] (22 MBps) [2024-12-06T15:53:31.252Z] Copying: 1023/1024 [MB] (10 MBps) [2024-12-06T15:53:31.252Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 15:53:31.197570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.197670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:42.559 [2024-12-06 15:53:31.197712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.559 [2024-12-06 15:53:31.197758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.559 [2024-12-06 15:53:31.200251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:42.559 [2024-12-06 15:53:31.205788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.205832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:42.559 [2024-12-06 15:53:31.205864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.428 ms 00:26:42.559 [2024-12-06 15:53:31.205876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.559 [2024-12-06 15:53:31.218092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.218140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.559 [2024-12-06 15:53:31.218175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.949 ms 00:26:42.559 [2024-12-06 15:53:31.218190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.559 [2024-12-06 15:53:31.241126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.241180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.559 [2024-12-06 15:53:31.241214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.913 ms 00:26:42.559 [2024-12-06 15:53:31.241233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.559 [2024-12-06 15:53:31.247288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.247337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:42.559 [2024-12-06 15:53:31.247351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.015 ms 00:26:42.559 [2024-12-06 15:53:31.247360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.559 [2024-12-06 15:53:31.248863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.559 [2024-12-06 15:53:31.249198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.559 [2024-12-06 15:53:31.249226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:26:42.559 [2024-12-06 15:53:31.249237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.253625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.253666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:42.820 [2024-12-06 15:53:31.253705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:26:42.820 [2024-12-06 
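
The copy progress above ticks roughly once a second in 21-24 MB steps and closes at an average of 22 MBps, which squares with the wall clock: 1024 MB at that rate needs about 46 s, and the updates span 15:52:48 to 15:53:31, the job having started a couple of seconds before the first tick printed:

\[ \frac{1024\ \mathrm{MB}}{22\ \mathrm{MB/s}} \approx 46.5\ \mathrm{s} \]
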
15:53:31.253716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.367419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.367677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.820 [2024-12-06 15:53:31.367705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.664 ms 00:26:42.820 [2024-12-06 15:53:31.367718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.369791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.369845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:42.820 [2024-12-06 15:53:31.369859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:26:42.820 [2024-12-06 15:53:31.369868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.371455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.371490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:42.820 [2024-12-06 15:53:31.371504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:26:42.820 [2024-12-06 15:53:31.371513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.372907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.372985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:42.820 [2024-12-06 15:53:31.373003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:26:42.820 [2024-12-06 15:53:31.373013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.374303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.820 [2024-12-06 15:53:31.374505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:42.820 [2024-12-06 15:53:31.374530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:26:42.820 [2024-12-06 15:53:31.374542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.820 [2024-12-06 15:53:31.374584] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:42.820 [2024-12-06 15:53:31.374606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116992 / 261120 wr_cnt: 1 state: open 00:26:42.820 [2024-12-06 15:53:31.374620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.820 [2024-12-06 15:53:31.374720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.374989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375016] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 
15:53:31.375307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:26:42.821 [2024-12-06 15:53:31.375581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:42.821 [2024-12-06 15:53:31.375786] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:42.822 [2024-12-06 15:53:31.375797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69 00:26:42.822 [2024-12-06 15:53:31.375814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116992 00:26:42.822 [2024-12-06 15:53:31.375829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117952 00:26:42.822 [2024-12-06 15:53:31.375839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116992 00:26:42.822 [2024-12-06 15:53:31.375850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:26:42.822 [2024-12-06 15:53:31.375860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:42.822 [2024-12-06 15:53:31.375870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:42.822 [2024-12-06 
15:53:31.375880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:42.822 [2024-12-06 15:53:31.375889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:42.822 [2024-12-06 15:53:31.375898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:42.822 [2024-12-06 15:53:31.375908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.822 [2024-12-06 15:53:31.375930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:42.822 [2024-12-06 15:53:31.375942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:26:42.822 [2024-12-06 15:53:31.375964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.378828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.822 [2024-12-06 15:53:31.378857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:42.822 [2024-12-06 15:53:31.378871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:26:42.822 [2024-12-06 15:53:31.378883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.379147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.822 [2024-12-06 15:53:31.379168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:42.822 [2024-12-06 15:53:31.379181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:42.822 [2024-12-06 15:53:31.379197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.388701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.388939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.822 [2024-12-06 15:53:31.388978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.388992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.389054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.389070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.822 [2024-12-06 15:53:31.389096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.389113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.389209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.389228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.822 [2024-12-06 15:53:31.389267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.389278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.389308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.389321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.822 [2024-12-06 15:53:31.389348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.389358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.404162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
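
The statistics dumped above are internally consistent: all 116992 valid LBAs sit in the single written band (Band 1: 116992 / 261120, every other band free), and the write-amplification factor is simply total media writes over user writes, the 960 extra blocks presumably being the FTL's own metadata writes:

\[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{117952}{116992} \approx 1.0082 \]
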
[FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.404220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.822 [2024-12-06 15:53:31.404238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.404249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.822 [2024-12-06 15:53:31.415362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.822 [2024-12-06 15:53:31.415485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.822 [2024-12-06 15:53:31.415574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.822 [2024-12-06 15:53:31.415724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:42.822 [2024-12-06 15:53:31.415821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.415882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.415896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.822 [2024-12-06 15:53:31.415907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.415917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 [2024-12-06 15:53:31.416006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.822 [2024-12-06 15:53:31.416024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.822 [2024-12-06 15:53:31.416036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.822 [2024-12-06 15:53:31.416078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.822 
[2024-12-06 15:53:31.416242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 219.704 ms, result 0 00:26:43.760 00:26:43.761 00:26:43.761 15:53:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:43.761 [2024-12-06 15:53:32.317594] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:26:43.761 [2024-12-06 15:53:32.317760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91263 ] 00:26:44.020 [2024-12-06 15:53:32.464930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.020 [2024-12-06 15:53:32.506215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.020 [2024-12-06 15:53:32.652975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.020 [2024-12-06 15:53:32.653067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.280 [2024-12-06 15:53:32.811091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.280 [2024-12-06 15:53:32.811142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:44.280 [2024-12-06 15:53:32.811166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.280 [2024-12-06 15:53:32.811178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.280 [2024-12-06 15:53:32.811237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.280 [2024-12-06 15:53:32.811255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.280 [2024-12-06 15:53:32.811267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:44.280 [2024-12-06 15:53:32.811294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.280 [2024-12-06 15:53:32.811331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:44.280 [2024-12-06 15:53:32.811571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:44.280 [2024-12-06 15:53:32.811595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.280 [2024-12-06 15:53:32.811615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.280 [2024-12-06 15:53:32.811632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:26:44.280 [2024-12-06 15:53:32.811642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.280 [2024-12-06 15:53:32.814515] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:44.280 [2024-12-06 15:53:32.818686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.280 [2024-12-06 15:53:32.818995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:44.280 [2024-12-06 15:53:32.819195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:26:44.281 [2024-12-06 15:53:32.819268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
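
The spdk_dd invocation above reads from the restored FTL bdev (--ib=ftl0) back into a test file. If --skip and --count follow dd semantics and are counted in input blocks of the 4 KiB size implied earlier (an assumption; the log does not spell out the unit), the command skips 512 MiB into the device and reads 1024 MiB, matching the 1024 MB transfers seen elsewhere in this test:

\[ 131072 \times 4\,\mathrm{KiB} = 512\,\mathrm{MiB}, \qquad 262144 \times 4\,\mathrm{KiB} = 1024\,\mathrm{MiB} \]
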
[FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.819423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.819671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:44.281 [2024-12-06 15:53:32.819724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:44.281 [2024-12-06 15:53:32.819737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.833291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.833520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.281 [2024-12-06 15:53:32.833556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:26:44.281 [2024-12-06 15:53:32.833568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.833703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.833723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.281 [2024-12-06 15:53:32.833735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:44.281 [2024-12-06 15:53:32.833747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.833857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.833876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:44.281 [2024-12-06 15:53:32.833888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:44.281 [2024-12-06 15:53:32.833904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.833948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:44.281 [2024-12-06 15:53:32.836555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.836632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.281 [2024-12-06 15:53:32.836648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:26:44.281 [2024-12-06 15:53:32.836659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.836701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.836717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:44.281 [2024-12-06 15:53:32.836730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:44.281 [2024-12-06 15:53:32.836745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.836775] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:44.281 [2024-12-06 15:53:32.836806] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:44.281 [2024-12-06 15:53:32.836891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:44.281 [2024-12-06 15:53:32.836933] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:44.281 [2024-12-06 15:53:32.837051] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:44.281 [2024-12-06 15:53:32.837090] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:44.281 [2024-12-06 15:53:32.837108] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:44.281 [2024-12-06 15:53:32.837121] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837133] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837144] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:44.281 [2024-12-06 15:53:32.837154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:44.281 [2024-12-06 15:53:32.837164] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:44.281 [2024-12-06 15:53:32.837174] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:44.281 [2024-12-06 15:53:32.837185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.837196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:44.281 [2024-12-06 15:53:32.837228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:26:44.281 [2024-12-06 15:53:32.837238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.837337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.281 [2024-12-06 15:53:32.837351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:44.281 [2024-12-06 15:53:32.837362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:44.281 [2024-12-06 15:53:32.837373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.281 [2024-12-06 15:53:32.837486] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:44.281 [2024-12-06 15:53:32.837509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:44.281 [2024-12-06 15:53:32.837522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:44.281 [2024-12-06 15:53:32.837555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:44.281 [2024-12-06 15:53:32.837588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.281 [2024-12-06 15:53:32.837624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:44.281 [2024-12-06 15:53:32.837633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:44.281 [2024-12-06 15:53:32.837643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.281 [2024-12-06 
15:53:32.837652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:44.281 [2024-12-06 15:53:32.837662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:44.281 [2024-12-06 15:53:32.837671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:44.281 [2024-12-06 15:53:32.837694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:44.281 [2024-12-06 15:53:32.837723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:44.281 [2024-12-06 15:53:32.837751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:44.281 [2024-12-06 15:53:32.837779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:44.281 [2024-12-06 15:53:32.837808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.281 [2024-12-06 15:53:32.837828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:44.281 [2024-12-06 15:53:32.837840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.281 [2024-12-06 15:53:32.837859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:44.281 [2024-12-06 15:53:32.837869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:44.281 [2024-12-06 15:53:32.837879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.281 [2024-12-06 15:53:32.837888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:44.281 [2024-12-06 15:53:32.837898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:44.281 [2024-12-06 15:53:32.837907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:44.281 [2024-12-06 15:53:32.837926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:44.281 [2024-12-06 15:53:32.837936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.837961] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:44.281 [2024-12-06 15:53:32.837976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:26:44.281 [2024-12-06 15:53:32.838017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.281 [2024-12-06 15:53:32.838029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.281 [2024-12-06 15:53:32.838041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:44.281 [2024-12-06 15:53:32.838054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:44.281 [2024-12-06 15:53:32.838065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:44.281 [2024-12-06 15:53:32.838076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:44.281 [2024-12-06 15:53:32.838086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:44.281 [2024-12-06 15:53:32.838096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:44.281 [2024-12-06 15:53:32.838112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:44.281 [2024-12-06 15:53:32.838124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.281 [2024-12-06 15:53:32.838150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:44.282 [2024-12-06 15:53:32.838161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:44.282 [2024-12-06 15:53:32.838171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:44.282 [2024-12-06 15:53:32.838181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:44.282 [2024-12-06 15:53:32.838192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:44.282 [2024-12-06 15:53:32.838203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:44.282 [2024-12-06 15:53:32.838214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:44.282 [2024-12-06 15:53:32.838225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:44.282 [2024-12-06 15:53:32.838236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:44.282 [2024-12-06 15:53:32.838261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838344] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:44.282 [2024-12-06 15:53:32.838371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:44.282 [2024-12-06 15:53:32.838383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:44.282 [2024-12-06 15:53:32.838406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:44.282 [2024-12-06 15:53:32.838417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:44.282 [2024-12-06 15:53:32.838444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:44.282 [2024-12-06 15:53:32.838456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.838468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:44.282 [2024-12-06 15:53:32.838480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:26:44.282 [2024-12-06 15:53:32.838491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.857657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.857928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.282 [2024-12-06 15:53:32.857992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.073 ms 00:26:44.282 [2024-12-06 15:53:32.858010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.858124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.858140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:44.282 [2024-12-06 15:53:32.858159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:44.282 [2024-12-06 15:53:32.858172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.889651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.889694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.282 [2024-12-06 15:53:32.889712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.380 ms 00:26:44.282 [2024-12-06 15:53:32.889735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.889793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.889810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.282 [2024-12-06 15:53:32.889822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.282 [2024-12-06 15:53:32.889833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.890844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 
15:53:32.890875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.282 [2024-12-06 15:53:32.890907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:26:44.282 [2024-12-06 15:53:32.890926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.891154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.891174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.282 [2024-12-06 15:53:32.891186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:26:44.282 [2024-12-06 15:53:32.891198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.901950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.901987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.282 [2024-12-06 15:53:32.902004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.703 ms 00:26:44.282 [2024-12-06 15:53:32.902014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.905748] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:44.282 [2024-12-06 15:53:32.905794] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:44.282 [2024-12-06 15:53:32.905817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.905829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:44.282 [2024-12-06 15:53:32.905841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:26:44.282 [2024-12-06 15:53:32.905854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.919046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.919089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:44.282 [2024-12-06 15:53:32.919106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.150 ms 00:26:44.282 [2024-12-06 15:53:32.919117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.921093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.921130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:44.282 [2024-12-06 15:53:32.921146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:26:44.282 [2024-12-06 15:53:32.921156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.922823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.922860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:44.282 [2024-12-06 15:53:32.922877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:26:44.282 [2024-12-06 15:53:32.922887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.923268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.923336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:26:44.282 [2024-12-06 15:53:32.923350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:26:44.282 [2024-12-06 15:53:32.923385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.947999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.948376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:44.282 [2024-12-06 15:53:32.948438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.585 ms 00:26:44.282 [2024-12-06 15:53:32.948452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.955737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:44.282 [2024-12-06 15:53:32.958065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.958099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:44.282 [2024-12-06 15:53:32.958114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.555 ms 00:26:44.282 [2024-12-06 15:53:32.958125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.958215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.958234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:44.282 [2024-12-06 15:53:32.958246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:44.282 [2024-12-06 15:53:32.958275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.960561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.960606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:44.282 [2024-12-06 15:53:32.960632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:26:44.282 [2024-12-06 15:53:32.960643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.960681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.960696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:44.282 [2024-12-06 15:53:32.960708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:44.282 [2024-12-06 15:53:32.960718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.960780] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:44.282 [2024-12-06 15:53:32.960798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.960808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:44.282 [2024-12-06 15:53:32.960829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:44.282 [2024-12-06 15:53:32.960849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.965366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.282 [2024-12-06 15:53:32.965539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:44.282 [2024-12-06 15:53:32.965565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.487 ms 00:26:44.282 [2024-12-06 15:53:32.965578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.282 [2024-12-06 15:53:32.965707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.283 [2024-12-06 15:53:32.965727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:44.283 [2024-12-06 15:53:32.965740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:44.283 [2024-12-06 15:53:32.965760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.283 [2024-12-06 15:53:32.968970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.619 ms, result 0 00:26:45.662  [2024-12-06T15:53:35.291Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-06T15:53:36.228Z] Copying: 41/1024 [MB] (22 MBps) [2024-12-06T15:53:37.165Z] Copying: 63/1024 [MB] (22 MBps) [2024-12-06T15:53:38.540Z] Copying: 85/1024 [MB] (22 MBps) [2024-12-06T15:53:39.475Z] Copying: 108/1024 [MB] (22 MBps) [2024-12-06T15:53:40.408Z] Copying: 130/1024 [MB] (22 MBps) [2024-12-06T15:53:41.342Z] Copying: 153/1024 [MB] (22 MBps) [2024-12-06T15:53:42.279Z] Copying: 175/1024 [MB] (22 MBps) [2024-12-06T15:53:43.216Z] Copying: 198/1024 [MB] (22 MBps) [2024-12-06T15:53:44.153Z] Copying: 220/1024 [MB] (22 MBps) [2024-12-06T15:53:45.531Z] Copying: 242/1024 [MB] (22 MBps) [2024-12-06T15:53:46.465Z] Copying: 264/1024 [MB] (21 MBps) [2024-12-06T15:53:47.400Z] Copying: 286/1024 [MB] (21 MBps) [2024-12-06T15:53:48.335Z] Copying: 308/1024 [MB] (21 MBps) [2024-12-06T15:53:49.273Z] Copying: 330/1024 [MB] (21 MBps) [2024-12-06T15:53:50.206Z] Copying: 352/1024 [MB] (21 MBps) [2024-12-06T15:53:51.583Z] Copying: 374/1024 [MB] (21 MBps) [2024-12-06T15:53:52.150Z] Copying: 395/1024 [MB] (21 MBps) [2024-12-06T15:53:53.528Z] Copying: 418/1024 [MB] (22 MBps) [2024-12-06T15:53:54.465Z] Copying: 440/1024 [MB] (22 MBps) [2024-12-06T15:53:55.403Z] Copying: 462/1024 [MB] (22 MBps) [2024-12-06T15:53:56.365Z] Copying: 485/1024 [MB] (22 MBps) [2024-12-06T15:53:57.345Z] Copying: 507/1024 [MB] (22 MBps) [2024-12-06T15:53:58.281Z] Copying: 530/1024 [MB] (23 MBps) [2024-12-06T15:53:59.215Z] Copying: 553/1024 [MB] (22 MBps) [2024-12-06T15:54:00.149Z] Copying: 576/1024 [MB] (22 MBps) [2024-12-06T15:54:01.523Z] Copying: 599/1024 [MB] (22 MBps) [2024-12-06T15:54:02.459Z] Copying: 621/1024 [MB] (22 MBps) [2024-12-06T15:54:03.396Z] Copying: 643/1024 [MB] (22 MBps) [2024-12-06T15:54:04.333Z] Copying: 666/1024 [MB] (22 MBps) [2024-12-06T15:54:05.277Z] Copying: 688/1024 [MB] (22 MBps) [2024-12-06T15:54:06.215Z] Copying: 711/1024 [MB] (22 MBps) [2024-12-06T15:54:07.153Z] Copying: 733/1024 [MB] (22 MBps) [2024-12-06T15:54:08.530Z] Copying: 755/1024 [MB] (22 MBps) [2024-12-06T15:54:09.467Z] Copying: 778/1024 [MB] (22 MBps) [2024-12-06T15:54:10.404Z] Copying: 800/1024 [MB] (22 MBps) [2024-12-06T15:54:11.339Z] Copying: 824/1024 [MB] (23 MBps) [2024-12-06T15:54:12.274Z] Copying: 846/1024 [MB] (22 MBps) [2024-12-06T15:54:13.210Z] Copying: 869/1024 [MB] (22 MBps) [2024-12-06T15:54:14.585Z] Copying: 892/1024 [MB] (22 MBps) [2024-12-06T15:54:15.151Z] Copying: 915/1024 [MB] (23 MBps) [2024-12-06T15:54:16.523Z] Copying: 938/1024 [MB] (22 MBps) [2024-12-06T15:54:17.458Z] Copying: 960/1024 [MB] (22 MBps) [2024-12-06T15:54:18.395Z] Copying: 983/1024 [MB] (22 MBps) [2024-12-06T15:54:18.962Z] Copying: 1006/1024 [MB] (22 MBps) [2024-12-06T15:54:19.900Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 
15:54:19.538544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.538632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:31.207 [2024-12-06 15:54:19.538661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:31.207 [2024-12-06 15:54:19.538677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.538718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:31.207 [2024-12-06 15:54:19.539843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.539879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:31.207 [2024-12-06 15:54:19.539903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:27:31.207 [2024-12-06 15:54:19.539920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.540252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.540288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:31.207 [2024-12-06 15:54:19.540305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:27:31.207 [2024-12-06 15:54:19.540320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.546258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.546451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:31.207 [2024-12-06 15:54:19.546595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.909 ms 00:27:31.207 [2024-12-06 15:54:19.546736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.555497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.555973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:31.207 [2024-12-06 15:54:19.556141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.646 ms 00:27:31.207 [2024-12-06 15:54:19.556223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.557866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.558115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:31.207 [2024-12-06 15:54:19.558259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:27:31.207 [2024-12-06 15:54:19.558289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.562721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.562774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:31.207 [2024-12-06 15:54:19.562796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:27:31.207 [2024-12-06 15:54:19.562822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.690074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.690156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:31.207 [2024-12-06 15:54:19.690182] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.198 ms 00:27:31.207 [2024-12-06 15:54:19.690199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.692254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.692450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:31.207 [2024-12-06 15:54:19.692484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:27:31.207 [2024-12-06 15:54:19.692500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.694279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.694330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:31.207 [2024-12-06 15:54:19.694352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:27:31.207 [2024-12-06 15:54:19.694367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.695782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.695974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:31.207 [2024-12-06 15:54:19.696005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:27:31.207 [2024-12-06 15:54:19.696022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.697381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.207 [2024-12-06 15:54:19.697430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:31.207 [2024-12-06 15:54:19.697449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:27:31.207 [2024-12-06 15:54:19.697463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.207 [2024-12-06 15:54:19.697511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:31.207 [2024-12-06 15:54:19.697556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:31.207 [2024-12-06 15:54:19.697575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 
[2024-12-06 15:54:19.697716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.697988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.698004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.698020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:31.207 [2024-12-06 15:54:19.698035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:27:31.208 [2024-12-06 15:54:19.698134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.698988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:31.208 [2024-12-06 15:54:19.699226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:31.208 [2024-12-06 15:54:19.699259] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b44dbde8-f5b0-45a1-aca2-ca3e1c53ab69 00:27:31.208 [2024-12-06 15:54:19.699275] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:31.208 [2024-12-06 15:54:19.699297] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15040 00:27:31.208 [2024-12-06 15:54:19.699318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14080 00:27:31.208 [2024-12-06 15:54:19.699335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0682 00:27:31.208 [2024-12-06 15:54:19.699351] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:31.208 [2024-12-06 15:54:19.699377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:31.208 [2024-12-06 15:54:19.699393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:31.208 [2024-12-06 15:54:19.699407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:31.208 [2024-12-06 15:54:19.699421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:31.208 [2024-12-06 15:54:19.699437] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:31.208 [2024-12-06 15:54:19.699453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:31.208 [2024-12-06 15:54:19.699469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:27:31.208 [2024-12-06 15:54:19.699484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.208 [2024-12-06 15:54:19.702962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.208 [2024-12-06 15:54:19.703126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:31.208 [2024-12-06 15:54:19.703280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:27:31.208 [2024-12-06 15:54:19.703346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.208 [2024-12-06 15:54:19.703661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.208 [2024-12-06 15:54:19.703796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:31.208 [2024-12-06 15:54:19.703956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:27:31.208 [2024-12-06 15:54:19.704026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.208 [2024-12-06 15:54:19.715473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.208 [2024-12-06 15:54:19.715667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.209 [2024-12-06 15:54:19.715807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.715869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.716117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.716292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.209 [2024-12-06 15:54:19.716438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.716574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.716857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.717038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.209 [2024-12-06 15:54:19.717198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.717266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.717420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.717558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.209 [2024-12-06 15:54:19.717710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.717776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.735315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.735574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.209 [2024-12-06 15:54:19.735610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.735627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:31.209 [2024-12-06 15:54:19.749214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.749280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.209 [2024-12-06 15:54:19.749305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.749321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.749423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.749446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.209 [2024-12-06 15:54:19.749463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.749478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.749577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.749601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.209 [2024-12-06 15:54:19.749617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.749632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.749761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.749794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.209 [2024-12-06 15:54:19.749812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.749828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.749903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.749927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:31.209 [2024-12-06 15:54:19.749992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.750010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.750075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.750102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.209 [2024-12-06 15:54:19.750118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.750133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.750204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.209 [2024-12-06 15:54:19.750225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.209 [2024-12-06 15:54:19.750241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.209 [2024-12-06 15:54:19.750257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.209 [2024-12-06 15:54:19.750482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.878 ms, result 0 00:27:31.468 00:27:31.468 00:27:31.468 15:54:20 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:33.370 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:33.370 15:54:21 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:33.370 15:54:21 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:33.370 15:54:21 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:33.370 Process with pid 89663 is not found 00:27:33.370 Remove shared memory files 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89663 00:27:33.370 15:54:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 89663 ']' 00:27:33.370 15:54:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 89663 00:27:33.370 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89663) - No such process 00:27:33.370 15:54:22 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 89663 is not found' 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:33.370 15:54:22 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:33.628 ************************************ 00:27:33.628 END TEST ftl_restore 00:27:33.628 ************************************ 00:27:33.628 00:27:33.628 real 3m26.197s 00:27:33.628 user 3m12.303s 00:27:33.628 sys 0m14.937s 00:27:33.628 15:54:22 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.628 15:54:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:33.628 15:54:22 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:33.628 15:54:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:33.628 15:54:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.628 15:54:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:33.628 ************************************ 00:27:33.628 START TEST ftl_dirty_shutdown 00:27:33.628 ************************************ 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:33.628 * Looking for test storage... 
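[annotation] The teardown traced above is the standard trap/cleanup pattern in the ftl test scripts: arm a cleanup trap before the test body, then disarm it and clean up explicitly once md5sum -c has passed. A minimal sketch of that pattern, with restore_kill abridged to the rm/killprocess/remove_shm calls visible in the trace; $testdir and $svcpid are assumed to be set by the caller, and this is a sketch, not the verbatim restore.sh:

    restore_kill() {
        # Drop the test artifacts seen in the trace above.
        rm -f "$testdir/testfile" "$testdir/testfile.md5" "$testdir/config/ftl.json"
        killprocess "$svcpid"   # prints a notice instead of failing if the pid is already gone
        remove_shm              # remove shared memory files
    }
    # Any abnormal exit runs the cleanup and fails the test.
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    # ... test body: restore the FTL device, then verify data with md5sum -c ...
    # Success path: disarm the trap, then clean up explicitly.
    trap - SIGINT SIGTERM EXIT
    restore_kill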
00:27:33.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.628 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.887 --rc genhtml_branch_coverage=1 00:27:33.887 --rc genhtml_function_coverage=1 00:27:33.887 --rc genhtml_legend=1 00:27:33.887 --rc geninfo_all_blocks=1 00:27:33.887 --rc geninfo_unexecuted_blocks=1 00:27:33.887 00:27:33.887 ' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.887 --rc genhtml_branch_coverage=1 00:27:33.887 --rc genhtml_function_coverage=1 00:27:33.887 --rc genhtml_legend=1 00:27:33.887 --rc geninfo_all_blocks=1 00:27:33.887 --rc geninfo_unexecuted_blocks=1 00:27:33.887 00:27:33.887 ' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.887 --rc genhtml_branch_coverage=1 00:27:33.887 --rc genhtml_function_coverage=1 00:27:33.887 --rc genhtml_legend=1 00:27:33.887 --rc geninfo_all_blocks=1 00:27:33.887 --rc geninfo_unexecuted_blocks=1 00:27:33.887 00:27:33.887 ' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.887 --rc genhtml_branch_coverage=1 00:27:33.887 --rc genhtml_function_coverage=1 00:27:33.887 --rc genhtml_legend=1 00:27:33.887 --rc geninfo_all_blocks=1 00:27:33.887 --rc geninfo_unexecuted_blocks=1 00:27:33.887 00:27:33.887 ' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:33.887 15:54:22 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91821 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91821 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91821 ']' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.887 15:54:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.887 [2024-12-06 15:54:22.453275] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
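The getopts :u:c: / case $opt lines above are dirty_shutdown.sh consuming its -c flag (the NV-cache PCIe BDF, 0000:00:10.0) before taking the base-device BDF positionally (device=0000:00:11.0). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script; the getopts string and the nv_cache/device names appear verbatim above, while the -u branch and the exact script body are assumptions:

    # Sketch only -- reconstructed from the xtrace, not the verbatim dirty_shutdown.sh.
    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # seen above: -c 0000:00:10.0 selects the NV-cache device
            u) uuid=$OPTARG ;;       # assumption: -u would pass an existing FTL UUID
        esac
    done
    shift $((OPTIND - 1))            # expands to the "shift 2" shown in the trace
    device=$1                        # remaining positional arg: base device, here 0000:00:11.0

With the options consumed, the script falls through to its defaults (timeout=240, block_size=4096, chunk_size=262144, data_size=262144) and launches spdk_tgt, as the trace continues below.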
00:27:33.888 [2024-12-06 15:54:22.453425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91821 ] 00:27:34.146 [2024-12-06 15:54:22.609758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.146 [2024-12-06 15:54:22.665583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:35.080 15:54:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:35.338 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:35.338 { 00:27:35.338 "name": "nvme0n1", 00:27:35.338 "aliases": [ 00:27:35.338 "1f0b9c4e-d606-46a3-b897-d9c805411650" 00:27:35.338 ], 00:27:35.338 "product_name": "NVMe disk", 00:27:35.338 "block_size": 4096, 00:27:35.338 "num_blocks": 1310720, 00:27:35.338 "uuid": "1f0b9c4e-d606-46a3-b897-d9c805411650", 00:27:35.338 "numa_id": -1, 00:27:35.338 "assigned_rate_limits": { 00:27:35.338 "rw_ios_per_sec": 0, 00:27:35.338 "rw_mbytes_per_sec": 0, 00:27:35.338 "r_mbytes_per_sec": 0, 00:27:35.338 "w_mbytes_per_sec": 0 00:27:35.338 }, 00:27:35.338 "claimed": true, 00:27:35.338 "claim_type": "read_many_write_one", 00:27:35.338 "zoned": false, 00:27:35.338 "supported_io_types": { 00:27:35.338 "read": true, 00:27:35.338 "write": true, 00:27:35.338 "unmap": true, 00:27:35.338 "flush": true, 00:27:35.338 "reset": true, 00:27:35.338 "nvme_admin": true, 00:27:35.338 "nvme_io": true, 00:27:35.338 "nvme_io_md": false, 00:27:35.338 "write_zeroes": true, 00:27:35.338 "zcopy": false, 00:27:35.338 "get_zone_info": false, 00:27:35.338 "zone_management": false, 00:27:35.338 "zone_append": false, 00:27:35.338 "compare": true, 00:27:35.338 "compare_and_write": false, 00:27:35.338 "abort": true, 00:27:35.338 "seek_hole": false, 00:27:35.338 "seek_data": false, 00:27:35.338 
"copy": true, 00:27:35.338 "nvme_iov_md": false 00:27:35.338 }, 00:27:35.338 "driver_specific": { 00:27:35.338 "nvme": [ 00:27:35.338 { 00:27:35.338 "pci_address": "0000:00:11.0", 00:27:35.338 "trid": { 00:27:35.338 "trtype": "PCIe", 00:27:35.338 "traddr": "0000:00:11.0" 00:27:35.338 }, 00:27:35.338 "ctrlr_data": { 00:27:35.338 "cntlid": 0, 00:27:35.338 "vendor_id": "0x1b36", 00:27:35.338 "model_number": "QEMU NVMe Ctrl", 00:27:35.338 "serial_number": "12341", 00:27:35.338 "firmware_revision": "8.0.0", 00:27:35.338 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:35.338 "oacs": { 00:27:35.338 "security": 0, 00:27:35.338 "format": 1, 00:27:35.338 "firmware": 0, 00:27:35.338 "ns_manage": 1 00:27:35.338 }, 00:27:35.338 "multi_ctrlr": false, 00:27:35.338 "ana_reporting": false 00:27:35.338 }, 00:27:35.338 "vs": { 00:27:35.338 "nvme_version": "1.4" 00:27:35.338 }, 00:27:35.338 "ns_data": { 00:27:35.338 "id": 1, 00:27:35.338 "can_share": false 00:27:35.338 } 00:27:35.338 } 00:27:35.338 ], 00:27:35.338 "mp_policy": "active_passive" 00:27:35.338 } 00:27:35.338 } 00:27:35.338 ]' 00:27:35.338 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:35.596 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:35.857 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ba0c5666-8370-4f45-974a-c93ec0126e0f 00:27:35.857 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:35.857 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba0c5666-8370-4f45-974a-c93ec0126e0f 00:27:36.128 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:36.403 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=b30ea659-6775-439b-958c-14a06bb7e9ea 00:27:36.403 15:54:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b30ea659-6775-439b-958c-14a06bb7e9ea 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:36.676 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:36.934 { 00:27:36.934 "name": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:36.934 "aliases": [ 00:27:36.934 "lvs/nvme0n1p0" 00:27:36.934 ], 00:27:36.934 "product_name": "Logical Volume", 00:27:36.934 "block_size": 4096, 00:27:36.934 "num_blocks": 26476544, 00:27:36.934 "uuid": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:36.934 "assigned_rate_limits": { 00:27:36.934 "rw_ios_per_sec": 0, 00:27:36.934 "rw_mbytes_per_sec": 0, 00:27:36.934 "r_mbytes_per_sec": 0, 00:27:36.934 "w_mbytes_per_sec": 0 00:27:36.934 }, 00:27:36.934 "claimed": false, 00:27:36.934 "zoned": false, 00:27:36.934 "supported_io_types": { 00:27:36.934 "read": true, 00:27:36.934 "write": true, 00:27:36.934 "unmap": true, 00:27:36.934 "flush": false, 00:27:36.934 "reset": true, 00:27:36.934 "nvme_admin": false, 00:27:36.934 "nvme_io": false, 00:27:36.934 "nvme_io_md": false, 00:27:36.934 "write_zeroes": true, 00:27:36.934 "zcopy": false, 00:27:36.934 "get_zone_info": false, 00:27:36.934 "zone_management": false, 00:27:36.934 "zone_append": false, 00:27:36.934 "compare": false, 00:27:36.934 "compare_and_write": false, 00:27:36.934 "abort": false, 00:27:36.934 "seek_hole": true, 00:27:36.934 "seek_data": true, 00:27:36.934 "copy": false, 00:27:36.934 "nvme_iov_md": false 00:27:36.934 }, 00:27:36.934 "driver_specific": { 00:27:36.934 "lvol": { 00:27:36.934 "lvol_store_uuid": "b30ea659-6775-439b-958c-14a06bb7e9ea", 00:27:36.934 "base_bdev": "nvme0n1", 00:27:36.934 "thin_provision": true, 00:27:36.934 "num_allocated_clusters": 0, 00:27:36.934 "snapshot": false, 00:27:36.934 "clone": false, 00:27:36.934 "esnap_clone": false 00:27:36.934 } 00:27:36.934 } 00:27:36.934 } 00:27:36.934 ]' 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:36.934 15:54:25 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:37.192 15:54:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:37.451 { 00:27:37.451 "name": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:37.451 "aliases": [ 00:27:37.451 "lvs/nvme0n1p0" 00:27:37.451 ], 00:27:37.451 "product_name": "Logical Volume", 00:27:37.451 "block_size": 4096, 00:27:37.451 "num_blocks": 26476544, 00:27:37.451 "uuid": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:37.451 "assigned_rate_limits": { 00:27:37.451 "rw_ios_per_sec": 0, 00:27:37.451 "rw_mbytes_per_sec": 0, 00:27:37.451 "r_mbytes_per_sec": 0, 00:27:37.451 "w_mbytes_per_sec": 0 00:27:37.451 }, 00:27:37.451 "claimed": false, 00:27:37.451 "zoned": false, 00:27:37.451 "supported_io_types": { 00:27:37.451 "read": true, 00:27:37.451 "write": true, 00:27:37.451 "unmap": true, 00:27:37.451 "flush": false, 00:27:37.451 "reset": true, 00:27:37.451 "nvme_admin": false, 00:27:37.451 "nvme_io": false, 00:27:37.451 "nvme_io_md": false, 00:27:37.451 "write_zeroes": true, 00:27:37.451 "zcopy": false, 00:27:37.451 "get_zone_info": false, 00:27:37.451 "zone_management": false, 00:27:37.451 "zone_append": false, 00:27:37.451 "compare": false, 00:27:37.451 "compare_and_write": false, 00:27:37.451 "abort": false, 00:27:37.451 "seek_hole": true, 00:27:37.451 "seek_data": true, 00:27:37.451 "copy": false, 00:27:37.451 "nvme_iov_md": false 00:27:37.451 }, 00:27:37.451 "driver_specific": { 00:27:37.451 "lvol": { 00:27:37.451 "lvol_store_uuid": "b30ea659-6775-439b-958c-14a06bb7e9ea", 00:27:37.451 "base_bdev": "nvme0n1", 00:27:37.451 "thin_provision": true, 00:27:37.451 "num_allocated_clusters": 0, 00:27:37.451 "snapshot": false, 00:27:37.451 "clone": false, 00:27:37.451 "esnap_clone": false 00:27:37.451 } 00:27:37.451 } 00:27:37.451 } 00:27:37.451 ]' 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:37.451 15:54:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:37.710 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58d23dbd-2189-4884-a592-bfe36d89551b 00:27:37.968 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:37.968 { 00:27:37.968 "name": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:37.968 "aliases": [ 00:27:37.968 "lvs/nvme0n1p0" 00:27:37.968 ], 00:27:37.968 "product_name": "Logical Volume", 00:27:37.968 "block_size": 4096, 00:27:37.968 "num_blocks": 26476544, 00:27:37.968 "uuid": "58d23dbd-2189-4884-a592-bfe36d89551b", 00:27:37.968 "assigned_rate_limits": { 00:27:37.968 "rw_ios_per_sec": 0, 00:27:37.968 "rw_mbytes_per_sec": 0, 00:27:37.968 "r_mbytes_per_sec": 0, 00:27:37.968 "w_mbytes_per_sec": 0 00:27:37.968 }, 00:27:37.968 "claimed": false, 00:27:37.968 "zoned": false, 00:27:37.968 "supported_io_types": { 00:27:37.968 "read": true, 00:27:37.968 "write": true, 00:27:37.968 "unmap": true, 00:27:37.968 "flush": false, 00:27:37.968 "reset": true, 00:27:37.968 "nvme_admin": false, 00:27:37.968 "nvme_io": false, 00:27:37.968 "nvme_io_md": false, 00:27:37.968 "write_zeroes": true, 00:27:37.968 "zcopy": false, 00:27:37.968 "get_zone_info": false, 00:27:37.968 "zone_management": false, 00:27:37.968 "zone_append": false, 00:27:37.968 "compare": false, 00:27:37.968 "compare_and_write": false, 00:27:37.968 "abort": false, 00:27:37.968 "seek_hole": true, 00:27:37.968 "seek_data": true, 00:27:37.968 "copy": false, 00:27:37.968 "nvme_iov_md": false 00:27:37.968 }, 00:27:37.968 "driver_specific": { 00:27:37.968 "lvol": { 00:27:37.968 "lvol_store_uuid": "b30ea659-6775-439b-958c-14a06bb7e9ea", 00:27:37.968 "base_bdev": "nvme0n1", 00:27:37.968 "thin_provision": true, 00:27:37.968 "num_allocated_clusters": 0, 00:27:37.968 "snapshot": false, 00:27:37.968 "clone": false, 00:27:37.968 "esnap_clone": false 00:27:37.968 } 00:27:37.968 } 00:27:37.968 } 00:27:37.968 ]' 00:27:37.968 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:37.968 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:37.968 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 58d23dbd-2189-4884-a592-bfe36d89551b 
--l2p_dram_limit 10' 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:38.225 15:54:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 58d23dbd-2189-4884-a592-bfe36d89551b --l2p_dram_limit 10 -c nvc0n1p0 00:27:38.484 [2024-12-06 15:54:26.942834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.942894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:38.484 [2024-12-06 15:54:26.942923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:38.484 [2024-12-06 15:54:26.942981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.943049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.943069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.484 [2024-12-06 15:54:26.943085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:38.484 [2024-12-06 15:54:26.943101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.943128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:38.484 [2024-12-06 15:54:26.943420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:38.484 [2024-12-06 15:54:26.943451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.943465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.484 [2024-12-06 15:54:26.943477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:27:38.484 [2024-12-06 15:54:26.943490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.943577] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b4e0e507-6c02-47f6-bfa2-3cd271d0b40b 00:27:38.484 [2024-12-06 15:54:26.945724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.945888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:38.484 [2024-12-06 15:54:26.946060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:38.484 [2024-12-06 15:54:26.946109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.958169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.958399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.484 [2024-12-06 15:54:26.958513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.965 ms 00:27:38.484 [2024-12-06 15:54:26.958568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.958783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.958838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.484 [2024-12-06 15:54:26.959130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:38.484 [2024-12-06 15:54:26.959182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.959309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.959374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.484 [2024-12-06 15:54:26.959414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:38.484 [2024-12-06 15:54:26.959543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.959700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.484 [2024-12-06 15:54:26.962507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.962688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.484 [2024-12-06 15:54:26.962789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:27:38.484 [2024-12-06 15:54:26.962838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.963061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.963106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.484 [2024-12-06 15:54:26.963119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:38.484 [2024-12-06 15:54:26.963135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.963190] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:38.484 [2024-12-06 15:54:26.963374] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:38.484 [2024-12-06 15:54:26.963393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.484 [2024-12-06 15:54:26.963410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.484 [2024-12-06 15:54:26.963424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.484 [2024-12-06 15:54:26.963451] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.484 [2024-12-06 15:54:26.963463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.484 [2024-12-06 15:54:26.963479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.484 [2024-12-06 15:54:26.963496] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.484 [2024-12-06 15:54:26.963509] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.484 [2024-12-06 15:54:26.963521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.963534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.484 [2024-12-06 15:54:26.963544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:27:38.484 [2024-12-06 15:54:26.963557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.963642] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.484 [2024-12-06 15:54:26.963661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.484 [2024-12-06 15:54:26.963672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:38.484 [2024-12-06 15:54:26.963696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.484 [2024-12-06 15:54:26.963793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.484 [2024-12-06 15:54:26.963819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.484 [2024-12-06 15:54:26.963831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.484 [2024-12-06 15:54:26.963845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.484 [2024-12-06 15:54:26.963855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.484 [2024-12-06 15:54:26.963867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.484 [2024-12-06 15:54:26.963877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:38.484 [2024-12-06 15:54:26.963888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.484 [2024-12-06 15:54:26.963898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.484 [2024-12-06 15:54:26.963909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.484 [2024-12-06 15:54:26.963919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.484 [2024-12-06 15:54:26.963930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.484 [2024-12-06 15:54:26.963940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.484 [2024-12-06 15:54:26.963971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.485 [2024-12-06 15:54:26.964125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.485 [2024-12-06 15:54:26.964181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.485 [2024-12-06 15:54:26.964217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.485 [2024-12-06 15:54:26.964254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.485 [2024-12-06 15:54:26.964287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.485 [2024-12-06 15:54:26.964399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.485 [2024-12-06 15:54:26.964431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.485 [2024-12-06 15:54:26.964467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.485 [2024-12-06 15:54:26.964499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.485 [2024-12-06 15:54:26.964591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.485 [2024-12-06 15:54:26.964756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.485 [2024-12-06 15:54:26.964807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.485 [2024-12-06 15:54:26.964899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.485 [2024-12-06 15:54:26.964976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.485 [2024-12-06 15:54:26.965111] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.485 [2024-12-06 15:54:26.965163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.485 [2024-12-06 15:54:26.965198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.485 [2024-12-06 15:54:26.965294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.485 [2024-12-06 15:54:26.965316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.485 [2024-12-06 15:54:26.965341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.485 [2024-12-06 15:54:26.965352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.485 [2024-12-06 15:54:26.965364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.485 [2024-12-06 15:54:26.965374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.485 [2024-12-06 15:54:26.965389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.485 [2024-12-06 15:54:26.965400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.485 [2024-12-06 15:54:26.965413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.485 [2024-12-06 15:54:26.965423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.485 [2024-12-06 15:54:26.965436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.485 [2024-12-06 15:54:26.965446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.485 [2024-12-06 15:54:26.965458] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.485 [2024-12-06 15:54:26.965480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:38.485 [2024-12-06 15:54:26.965499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.485 [2024-12-06 15:54:26.965510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.485 [2024-12-06 15:54:26.965523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.485 [2024-12-06 15:54:26.965534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.485 [2024-12-06 15:54:26.965547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.485 [2024-12-06 15:54:26.965557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.485 [2024-12-06 15:54:26.965583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.485 [2024-12-06 15:54:26.965593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.485 [2024-12-06 15:54:26.965608] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.485 [2024-12-06 15:54:26.965624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.485 [2024-12-06 15:54:26.965649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.485 [2024-12-06 15:54:26.965662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.485 [2024-12-06 15:54:26.965672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.485 [2024-12-06 15:54:26.965685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.485 [2024-12-06 15:54:26.965695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.485 [2024-12-06 15:54:26.965711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.485 [2024-12-06 15:54:26.965722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.485 [2024-12-06 15:54:26.965735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.485 [2024-12-06 15:54:26.965746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.485 [2024-12-06 15:54:26.965822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.485 [2024-12-06 15:54:26.965834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.485 [2024-12-06 15:54:26.965860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.485 [2024-12-06 15:54:26.965873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.485 [2024-12-06 15:54:26.965883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.485 [2024-12-06 15:54:26.965897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.485 [2024-12-06 15:54:26.965908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.485 [2024-12-06 15:54:26.965924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:27:38.485 [2024-12-06 15:54:26.965934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.485 [2024-12-06 15:54:26.966032] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:38.485 [2024-12-06 15:54:26.966050] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:41.774 [2024-12-06 15:54:30.219970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.220057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:41.774 [2024-12-06 15:54:30.220083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3253.915 ms 00:27:41.774 [2024-12-06 15:54:30.220095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.236034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.236088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:41.774 [2024-12-06 15:54:30.236110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.815 ms 00:27:41.774 [2024-12-06 15:54:30.236122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.236237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.236252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:41.774 [2024-12-06 15:54:30.236281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:41.774 [2024-12-06 15:54:30.236292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.251534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.251580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:41.774 [2024-12-06 15:54:30.251600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.162 ms 00:27:41.774 [2024-12-06 15:54:30.251612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.251657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.251670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:41.774 [2024-12-06 15:54:30.251684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:41.774 [2024-12-06 15:54:30.251695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.252292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.252316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:41.774 [2024-12-06 15:54:30.252332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:27:41.774 [2024-12-06 15:54:30.252343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.252491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.774 [2024-12-06 15:54:30.252507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:41.774 [2024-12-06 15:54:30.252521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:27:41.774 [2024-12-06 15:54:30.252531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.774 [2024-12-06 15:54:30.263448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.263687] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:41.775 [2024-12-06 15:54:30.263722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.881 ms 00:27:41.775 [2024-12-06 15:54:30.263735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.285753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:41.775 [2024-12-06 15:54:30.291371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.291417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:41.775 [2024-12-06 15:54:30.291434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.528 ms 00:27:41.775 [2024-12-06 15:54:30.291448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.366007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.366114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:41.775 [2024-12-06 15:54:30.366139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.522 ms 00:27:41.775 [2024-12-06 15:54:30.366156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.366389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.366411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:41.775 [2024-12-06 15:54:30.366424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:27:41.775 [2024-12-06 15:54:30.366438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.370416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.370475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:41.775 [2024-12-06 15:54:30.370492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:27:41.775 [2024-12-06 15:54:30.370510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.373582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.373780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:41.775 [2024-12-06 15:54:30.373807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:27:41.775 [2024-12-06 15:54:30.373823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.374234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.374256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:41.775 [2024-12-06 15:54:30.374270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:27:41.775 [2024-12-06 15:54:30.374301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.407454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.407505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:41.775 [2024-12-06 15:54:30.407525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.128 ms 00:27:41.775 [2024-12-06 15:54:30.407539] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.412306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.412348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:41.775 [2024-12-06 15:54:30.412364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:27:41.775 [2024-12-06 15:54:30.412377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.415680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.415736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:41.775 [2024-12-06 15:54:30.415751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:27:41.775 [2024-12-06 15:54:30.415764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.419326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.419369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:41.775 [2024-12-06 15:54:30.419384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:27:41.775 [2024-12-06 15:54:30.419399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.419444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.419473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:41.775 [2024-12-06 15:54:30.419485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:41.775 [2024-12-06 15:54:30.419497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.419592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.775 [2024-12-06 15:54:30.419612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:41.775 [2024-12-06 15:54:30.419624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:41.775 [2024-12-06 15:54:30.419640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.775 [2024-12-06 15:54:30.421111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3477.557 ms, result 0 00:27:41.775 { 00:27:41.775 "name": "ftl0", 00:27:41.775 "uuid": "b4e0e507-6c02-47f6-bfa2-3cd271d0b40b" 00:27:41.775 } 00:27:41.775 15:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:41.775 15:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:42.342 15:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:42.342 15:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:42.342 15:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:42.600 /dev/nbd0 00:27:42.600 15:54:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:42.600 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:42.600 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:42.601 1+0 records in 00:27:42.601 1+0 records out 00:27:42.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270556 s, 15.1 MB/s 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:42.601 15:54:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:42.601 [2024-12-06 15:54:31.175213] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:27:42.601 [2024-12-06 15:54:31.176047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91969 ] 00:27:42.859 [2024-12-06 15:54:31.331625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.859 [2024-12-06 15:54:31.368216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.795  [2024-12-06T15:54:33.860Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-06T15:54:34.794Z] Copying: 419/1024 [MB] (210 MBps) [2024-12-06T15:54:35.725Z] Copying: 600/1024 [MB] (180 MBps) [2024-12-06T15:54:36.658Z] Copying: 790/1024 [MB] (190 MBps) [2024-12-06T15:54:36.917Z] Copying: 976/1024 [MB] (186 MBps) [2024-12-06T15:54:37.175Z] Copying: 1024/1024 [MB] (average 194 MBps) 00:27:48.482 00:27:48.482 15:54:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:50.379 15:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:50.379 [2024-12-06 15:54:38.934997] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
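Two details in the block above are worth pulling out. First, waitfornbd polls /proc/partitions for the nbd device and then proves it actually serves I/O with a single direct 4 KiB read, checking that stat -c %s reports a non-empty file. Second, the spdk_dd invocation's --bs=4096 --count=262144 works out to 262144 * 4096 B = 1 GiB, which is why both copy runs report a 1024 MB total. A minimal reconstruction of the readiness check follows, under the assumption that the real function adds a short retry delay; the scratch path here is illustrative, where the trace writes to test/ftl/nbdtest:

    # Sketch of the waitfornbd check traced above (autotest_common.sh),
    # reconstructed from the xtrace rather than the verbatim function.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do            # trace shows (( i <= 20 ))
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # assumption: brief back-off between polls
        done
        # Prove the device serves reads: one direct 4 KiB read must succeed.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)            # trace: stat -c %s ... -> 4096
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                           # trace: '[' 4096 '!=' 0 ']'
    }

The second spdk_dd run, whose startup banner appears just above, replays the 1 GiB testfile onto /dev/nbd0 through the FTL bdev, which is why its progress lines below advance in ~15 MB steps rather than the ~200 MB steps of the urandom fill.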
00:27:50.379 [2024-12-06 15:54:38.935183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92046 ] 00:27:50.637 [2024-12-06 15:54:39.097179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.637 [2024-12-06 15:54:39.141397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.573  [2024-12-06T15:54:41.643Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-06T15:54:42.579Z] Copying: 29/1024 [MB] (15 MBps) [2024-12-06T15:54:43.515Z] Copying: 44/1024 [MB] (14 MBps) [2024-12-06T15:54:44.451Z] Copying: 59/1024 [MB] (15 MBps) [2024-12-06T15:54:45.386Z] Copying: 74/1024 [MB] (14 MBps) [2024-12-06T15:54:46.319Z] Copying: 89/1024 [MB] (15 MBps) [2024-12-06T15:54:47.255Z] Copying: 105/1024 [MB] (15 MBps) [2024-12-06T15:54:48.634Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-06T15:54:49.571Z] Copying: 135/1024 [MB] (15 MBps) [2024-12-06T15:54:50.508Z] Copying: 150/1024 [MB] (15 MBps) [2024-12-06T15:54:51.445Z] Copying: 165/1024 [MB] (14 MBps) [2024-12-06T15:54:52.382Z] Copying: 180/1024 [MB] (15 MBps) [2024-12-06T15:54:53.321Z] Copying: 196/1024 [MB] (15 MBps) [2024-12-06T15:54:54.297Z] Copying: 210/1024 [MB] (14 MBps) [2024-12-06T15:54:55.672Z] Copying: 226/1024 [MB] (15 MBps) [2024-12-06T15:54:56.609Z] Copying: 241/1024 [MB] (15 MBps) [2024-12-06T15:54:57.546Z] Copying: 257/1024 [MB] (15 MBps) [2024-12-06T15:54:58.482Z] Copying: 272/1024 [MB] (15 MBps) [2024-12-06T15:54:59.417Z] Copying: 288/1024 [MB] (15 MBps) [2024-12-06T15:55:00.354Z] Copying: 303/1024 [MB] (15 MBps) [2024-12-06T15:55:01.292Z] Copying: 318/1024 [MB] (15 MBps) [2024-12-06T15:55:02.671Z] Copying: 333/1024 [MB] (15 MBps) [2024-12-06T15:55:03.609Z] Copying: 349/1024 [MB] (15 MBps) [2024-12-06T15:55:04.547Z] Copying: 364/1024 [MB] (15 MBps) [2024-12-06T15:55:05.484Z] Copying: 380/1024 [MB] (15 MBps) [2024-12-06T15:55:06.419Z] Copying: 395/1024 [MB] (15 MBps) [2024-12-06T15:55:07.356Z] Copying: 410/1024 [MB] (15 MBps) [2024-12-06T15:55:08.293Z] Copying: 426/1024 [MB] (15 MBps) [2024-12-06T15:55:09.671Z] Copying: 441/1024 [MB] (15 MBps) [2024-12-06T15:55:10.608Z] Copying: 456/1024 [MB] (15 MBps) [2024-12-06T15:55:11.545Z] Copying: 472/1024 [MB] (15 MBps) [2024-12-06T15:55:12.478Z] Copying: 487/1024 [MB] (15 MBps) [2024-12-06T15:55:13.413Z] Copying: 503/1024 [MB] (15 MBps) [2024-12-06T15:55:14.349Z] Copying: 518/1024 [MB] (15 MBps) [2024-12-06T15:55:15.286Z] Copying: 533/1024 [MB] (15 MBps) [2024-12-06T15:55:16.672Z] Copying: 548/1024 [MB] (15 MBps) [2024-12-06T15:55:17.609Z] Copying: 564/1024 [MB] (15 MBps) [2024-12-06T15:55:18.546Z] Copying: 579/1024 [MB] (15 MBps) [2024-12-06T15:55:19.483Z] Copying: 594/1024 [MB] (15 MBps) [2024-12-06T15:55:20.420Z] Copying: 609/1024 [MB] (15 MBps) [2024-12-06T15:55:21.357Z] Copying: 624/1024 [MB] (15 MBps) [2024-12-06T15:55:22.296Z] Copying: 640/1024 [MB] (15 MBps) [2024-12-06T15:55:23.282Z] Copying: 655/1024 [MB] (15 MBps) [2024-12-06T15:55:24.658Z] Copying: 670/1024 [MB] (15 MBps) [2024-12-06T15:55:25.591Z] Copying: 686/1024 [MB] (15 MBps) [2024-12-06T15:55:26.526Z] Copying: 701/1024 [MB] (15 MBps) [2024-12-06T15:55:27.464Z] Copying: 716/1024 [MB] (15 MBps) [2024-12-06T15:55:28.401Z] Copying: 732/1024 [MB] (15 MBps) [2024-12-06T15:55:29.339Z] Copying: 747/1024 [MB] (15 MBps) [2024-12-06T15:55:30.275Z] Copying: 762/1024 [MB] (15 MBps) 
[2024-12-06T15:55:31.650Z] Copying: 777/1024 [MB] (15 MBps) [2024-12-06T15:55:32.588Z] Copying: 792/1024 [MB] (15 MBps) [2024-12-06T15:55:33.523Z] Copying: 808/1024 [MB] (15 MBps) [2024-12-06T15:55:34.456Z] Copying: 823/1024 [MB] (15 MBps) [2024-12-06T15:55:35.390Z] Copying: 838/1024 [MB] (15 MBps) [2024-12-06T15:55:36.325Z] Copying: 854/1024 [MB] (15 MBps) [2024-12-06T15:55:37.262Z] Copying: 869/1024 [MB] (15 MBps) [2024-12-06T15:55:38.639Z] Copying: 885/1024 [MB] (15 MBps) [2024-12-06T15:55:39.576Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-06T15:55:40.513Z] Copying: 915/1024 [MB] (14 MBps) [2024-12-06T15:55:41.449Z] Copying: 930/1024 [MB] (14 MBps) [2024-12-06T15:55:42.385Z] Copying: 945/1024 [MB] (14 MBps) [2024-12-06T15:55:43.322Z] Copying: 960/1024 [MB] (15 MBps) [2024-12-06T15:55:44.260Z] Copying: 975/1024 [MB] (15 MBps) [2024-12-06T15:55:45.637Z] Copying: 990/1024 [MB] (14 MBps) [2024-12-06T15:55:46.573Z] Copying: 1005/1024 [MB] (15 MBps) [2024-12-06T15:55:46.573Z] Copying: 1021/1024 [MB] (15 MBps) [2024-12-06T15:55:46.832Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:28:58.139 00:28:58.139 15:55:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:58.139 15:55:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:58.397 15:55:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:58.655 [2024-12-06 15:55:47.244137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.655 [2024-12-06 15:55:47.244194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:58.655 [2024-12-06 15:55:47.244220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:58.655 [2024-12-06 15:55:47.244234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.655 [2024-12-06 15:55:47.244276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:58.655 [2024-12-06 15:55:47.245144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.655 [2024-12-06 15:55:47.245176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:58.655 [2024-12-06 15:55:47.245191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:28:58.655 [2024-12-06 15:55:47.245206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.655 [2024-12-06 15:55:47.247216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.655 [2024-12-06 15:55:47.247266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:58.655 [2024-12-06 15:55:47.247285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:28:58.655 [2024-12-06 15:55:47.247301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.655 [2024-12-06 15:55:47.264563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.655 [2024-12-06 15:55:47.264614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:58.655 [2024-12-06 15:55:47.264644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:28:58.655 [2024-12-06 15:55:47.264661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.655 [2024-12-06 15:55:47.269733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 
[2024-12-06 15:55:47.269972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:58.656 [2024-12-06 15:55:47.270000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.027 ms 00:28:58.656 [2024-12-06 15:55:47.270019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.271231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.271282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:58.656 [2024-12-06 15:55:47.271300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:28:58.656 [2024-12-06 15:55:47.271316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.276314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.276379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:58.656 [2024-12-06 15:55:47.276398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:28:58.656 [2024-12-06 15:55:47.276415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.276539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.276565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:58.656 [2024-12-06 15:55:47.276579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:58.656 [2024-12-06 15:55:47.276609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.278743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.278791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:58.656 [2024-12-06 15:55:47.278808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:28:58.656 [2024-12-06 15:55:47.278823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.280403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.280602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:58.656 [2024-12-06 15:55:47.280644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:28:58.656 [2024-12-06 15:55:47.280664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.281925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.281990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:58.656 [2024-12-06 15:55:47.282008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:28:58.656 [2024-12-06 15:55:47.282023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.283228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.656 [2024-12-06 15:55:47.283274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:58.656 [2024-12-06 15:55:47.283291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:28:58.656 [2024-12-06 15:55:47.283309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.656 [2024-12-06 15:55:47.283351] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:58.656
[2024-12-06 15:55:47.283381 .. 15:55:47.285844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band records condensed] 00:28:58.657
[2024-12-06 15:55:47.285869] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-06 15:55:47.285882] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4e0e507-6c02-47f6-bfa2-3cd271d0b40b
[2024-12-06 15:55:47.285899] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-06 15:55:47.285912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-06 15:55:47.285927] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-06 15:55:47.285939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-06 15:55:47.285972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-12-06 15:55:47.285988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-12-06 15:55:47.286003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-12-06 15:55:47.286014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-12-06 15:55:47.286041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-12-06 15:55:47.286055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-06 15:55:47.286082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
[2024-12-06 15:55:47.286096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms
[2024-12-06 15:55:47.286117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-06 15:55:47.288523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-06 15:55:47.288562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
[2024-12-06 15:55:47.288578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms
[2024-12-06 15:55:47.288606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-06 15:55:47.288800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-06 15:55:47.288825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
[2024-12-06 15:55:47.288842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms
[2024-12-06 15:55:47.288857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-06 15:55:47.297468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-12-06 15:55:47.297518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
[2024-12-06 15:55:47.297537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-12-06 15:55:47.297553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-06 15:55:47.297615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-12-06 15:55:47.297638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
[2024-12-06 15:55:47.297655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.297676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.297777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.297807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.657 [2024-12-06 15:55:47.297822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.297838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.297867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.297887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.657 [2024-12-06 15:55:47.297902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.297922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.312604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.312694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.657 [2024-12-06 15:55:47.312715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.312732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.324850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.324913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.657 [2024-12-06 15:55:47.324952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.324974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.325094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.325138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.657 [2024-12-06 15:55:47.325154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.325170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.325327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.325353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.657 [2024-12-06 15:55:47.325367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.325384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.325491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.325517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.657 [2024-12-06 15:55:47.325531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.657 [2024-12-06 15:55:47.325547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.657 [2024-12-06 15:55:47.325607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.657 [2024-12-06 15:55:47.325641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:58.657 
[2024-12-06 15:55:47.325657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.658 [2024-12-06 15:55:47.325673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.658 [2024-12-06 15:55:47.325736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.658 [2024-12-06 15:55:47.325760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.658 [2024-12-06 15:55:47.325774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.658 [2024-12-06 15:55:47.325790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.658 [2024-12-06 15:55:47.325855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.658 [2024-12-06 15:55:47.325879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.658 [2024-12-06 15:55:47.325894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.658 [2024-12-06 15:55:47.325928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.658 [2024-12-06 15:55:47.326139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.954 ms, result 0 00:28:58.658 true 00:28:58.658 15:55:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91821 00:28:58.658 15:55:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91821 00:28:58.915 15:55:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:58.915 [2024-12-06 15:55:47.417846] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:58.915 [2024-12-06 15:55:47.417989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92733 ] 00:28:58.915 [2024-12-06 15:55:47.562857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.915 [2024-12-06 15:55:47.598764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.289  [2024-12-06T15:55:49.917Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-06T15:55:50.858Z] Copying: 390/1024 [MB] (195 MBps) [2024-12-06T15:55:51.848Z] Copying: 582/1024 [MB] (191 MBps) [2024-12-06T15:55:52.785Z] Copying: 773/1024 [MB] (191 MBps) [2024-12-06T15:55:53.044Z] Copying: 965/1024 [MB] (191 MBps) [2024-12-06T15:55:53.302Z] Copying: 1024/1024 [MB] (average 191 MBps) 00:29:04.609 00:29:04.609 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91821 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:04.609 15:55:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:04.867 [2024-12-06 15:55:53.402556] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
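The records above capture the dirty-shutdown scenario end to end: a clean detach of ftl0, a SIGKILL of the target so no further FTL shutdown sequence can run, then spdk_dd generating reference data and replaying it into ftl0. A minimal sketch of that sequence, reassembled from the commands echoed in this log (dirty_shutdown.sh derives the literal pid 91821 from a variable; everything else below is taken verbatim from the trace):

```bash
#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk

# Quiesce and detach the FTL bdev cleanly once (dirty_shutdown.sh@78-80),
# leaving a valid shutdown state on media for the test to dirty later.
sync /dev/nbd0
"$SPDK/scripts/rpc.py" nbd_stop_disk /dev/nbd0
"$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0

# Crash half of the test (dirty_shutdown.sh@83-84): SIGKILL the target
# and drop its trace file; 91821 is the spdk_tgt pid in this run.
kill -9 91821
rm -f /dev/shm/spdk_tgt_trace.pid91821

# Generate 1 GiB of reference data (262144 x 4096 B), then replay it
# into ftl0 at an offset; --seek skips that many output blocks, dd-style
# (dirty_shutdown.sh@87-88).
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
    --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"
```

The second spdk_dd invocation is what produces the 'FTL startup' trace that follows: its --json config recreates ftl0 inside the spdk_dd process itself, which is why a fresh DPDK/EAL initialization (pid 92800) appears next.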
00:29:04.867 [2024-12-06 15:55:53.402769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92800 ] 00:29:05.126 [2024-12-06 15:55:53.559911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.126 [2024-12-06 15:55:53.600473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.126 [2024-12-06 15:55:53.738688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:05.126 [2024-12-06 15:55:53.738797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:05.126 [2024-12-06 15:55:53.804158] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:05.126 [2024-12-06 15:55:53.804784] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:05.126 [2024-12-06 15:55:53.805026] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:05.695 [2024-12-06 15:55:54.078544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.695 [2024-12-06 15:55:54.078750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:05.695 [2024-12-06 15:55:54.078782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:05.695 [2024-12-06 15:55:54.078815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.695 [2024-12-06 15:55:54.078903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.695 [2024-12-06 15:55:54.078926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.695 [2024-12-06 15:55:54.078978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:05.695 [2024-12-06 15:55:54.078991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.695 [2024-12-06 15:55:54.079052] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:05.695 [2024-12-06 15:55:54.079302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:05.696 [2024-12-06 15:55:54.079329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.079341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.696 [2024-12-06 15:55:54.079354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:05.696 [2024-12-06 15:55:54.079366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.081253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:05.696 [2024-12-06 15:55:54.084108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.084151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:05.696 [2024-12-06 15:55:54.084168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:29:05.696 [2024-12-06 15:55:54.084187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.084271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.084292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:05.696 [2024-12-06 15:55:54.084305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:05.696 [2024-12-06 15:55:54.084317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.092927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.092991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.696 [2024-12-06 15:55:54.093030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.535 ms 00:29:05.696 [2024-12-06 15:55:54.093053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.093175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.093197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.696 [2024-12-06 15:55:54.093211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:05.696 [2024-12-06 15:55:54.093229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.093291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.093311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:05.696 [2024-12-06 15:55:54.093324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:05.696 [2024-12-06 15:55:54.093336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.093402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:05.696 [2024-12-06 15:55:54.095387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.095427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.696 [2024-12-06 15:55:54.095444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:29:05.696 [2024-12-06 15:55:54.095464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.095514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.095539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:05.696 [2024-12-06 15:55:54.095554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:05.696 [2024-12-06 15:55:54.095565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.095607] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:05.696 [2024-12-06 15:55:54.095643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:05.696 [2024-12-06 15:55:54.095699] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:05.696 [2024-12-06 15:55:54.095735] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:05.696 [2024-12-06 15:55:54.095832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:05.696 [2024-12-06 15:55:54.095849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:05.696 
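In the startup steps above, ftl0 is reassembled from its two backing devices: a base bdev holding user data and nvc0n1p0 acting as the write-buffer (NV) cache, with the existing superblock loaded rather than a fresh format. As a rough illustration, the RPC that creates such an instance looks like the sketch below; the base bdev name is a placeholder (the trace never prints it), the cache bdev and UUID are the ones dumped in this log, and the exact flag set depends on the rpc.py version in use:

```bash
# Hypothetical re-creation of ftl0 over its backing bdevs. Passing the
# UUID from the earlier stats dump asks FTL to load the existing
# instance instead of formatting a new one; 'base_nvme_bdev' is an
# illustrative name, not from this log.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_ftl_create -b ftl0 \
    -d base_nvme_bdev \
    -c nvc0n1p0 \
    -u b4e0e507-6c02-47f6-bfa2-3cd271d0b40b
```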
[2024-12-06 15:55:54.095864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:05.696 [2024-12-06 15:55:54.095889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:05.696 [2024-12-06 15:55:54.095906] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:05.696 [2024-12-06 15:55:54.095919] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:05.696 [2024-12-06 15:55:54.095966] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:05.696 [2024-12-06 15:55:54.095991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:05.696 [2024-12-06 15:55:54.096003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:05.696 [2024-12-06 15:55:54.096045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.096061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:05.696 [2024-12-06 15:55:54.096074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:29:05.696 [2024-12-06 15:55:54.096097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.096182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.696 [2024-12-06 15:55:54.096201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:05.696 [2024-12-06 15:55:54.096227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:05.696 [2024-12-06 15:55:54.096239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.696 [2024-12-06 15:55:54.096358] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:05.696 [2024-12-06 15:55:54.096388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:05.696 [2024-12-06 15:55:54.096402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:05.696 [2024-12-06 15:55:54.096450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:05.696 [2024-12-06 15:55:54.096492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:05.696 [2024-12-06 15:55:54.096516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:05.696 [2024-12-06 15:55:54.096527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:05.696 [2024-12-06 15:55:54.096537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:05.696 [2024-12-06 15:55:54.096548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:05.696 [2024-12-06 15:55:54.096559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:05.696 [2024-12-06 15:55:54.096570] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:05.696 [2024-12-06 15:55:54.096601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:05.696 [2024-12-06 15:55:54.096649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:05.696 [2024-12-06 15:55:54.096697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:05.696 [2024-12-06 15:55:54.096737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:05.696 [2024-12-06 15:55:54.096768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:05.696 [2024-12-06 15:55:54.096799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:05.696 [2024-12-06 15:55:54.096820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:05.696 [2024-12-06 15:55:54.096831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:05.696 [2024-12-06 15:55:54.096841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:05.696 [2024-12-06 15:55:54.096851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:05.696 [2024-12-06 15:55:54.096862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:05.696 [2024-12-06 15:55:54.096877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:05.696 [2024-12-06 15:55:54.096901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:05.696 [2024-12-06 15:55:54.096912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 15:55:54.096922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:05.696 [2024-12-06 15:55:54.096934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:05.696 [2024-12-06 15:55:54.096964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:05.696 [2024-12-06 15:55:54.096976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:05.696 [2024-12-06 
15:55:54.096987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:05.696 [2024-12-06 15:55:54.096999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:05.697 [2024-12-06 15:55:54.097016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:05.697 [2024-12-06 15:55:54.097027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:05.697 [2024-12-06 15:55:54.097038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:05.697 [2024-12-06 15:55:54.097048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:05.697 [2024-12-06 15:55:54.097061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:05.697 [2024-12-06 15:55:54.097075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:05.697 [2024-12-06 15:55:54.097106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:05.697 [2024-12-06 15:55:54.097118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:05.697 [2024-12-06 15:55:54.097128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:05.697 [2024-12-06 15:55:54.097140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:05.697 [2024-12-06 15:55:54.097166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:05.697 [2024-12-06 15:55:54.097178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:05.697 [2024-12-06 15:55:54.097189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:05.697 [2024-12-06 15:55:54.097200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:05.697 [2024-12-06 15:55:54.097211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:05.697 [2024-12-06 15:55:54.097267] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:05.697 [2024-12-06 15:55:54.097286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:05.697 [2024-12-06 15:55:54.097330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:05.697 [2024-12-06 15:55:54.097342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:05.697 [2024-12-06 15:55:54.097353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:05.697 [2024-12-06 15:55:54.097367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.097378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:05.697 [2024-12-06 15:55:54.097391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:29:05.697 [2024-12-06 15:55:54.097402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.115357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.115715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.697 [2024-12-06 15:55:54.115766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.864 ms 00:29:05.697 [2024-12-06 15:55:54.115797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.115918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.115945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:05.697 [2024-12-06 15:55:54.115990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:05.697 [2024-12-06 15:55:54.116004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.155141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.155208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:05.697 [2024-12-06 15:55:54.155229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.037 ms 00:29:05.697 [2024-12-06 15:55:54.155242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.155319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.155345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.697 [2024-12-06 15:55:54.155359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:05.697 [2024-12-06 15:55:54.155383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.156063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.156092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.697 [2024-12-06 15:55:54.156109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:29:05.697 [2024-12-06 15:55:54.156122] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.156293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.156319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.697 [2024-12-06 15:55:54.156333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:29:05.697 [2024-12-06 15:55:54.156345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.166185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.166232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.697 [2024-12-06 15:55:54.166249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.797 ms 00:29:05.697 [2024-12-06 15:55:54.166269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.169582] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:05.697 [2024-12-06 15:55:54.169636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:05.697 [2024-12-06 15:55:54.169655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.169667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:05.697 [2024-12-06 15:55:54.169680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:29:05.697 [2024-12-06 15:55:54.169693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.183134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.183178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:05.697 [2024-12-06 15:55:54.183196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.395 ms 00:29:05.697 [2024-12-06 15:55:54.183208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.185078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.185113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:05.697 [2024-12-06 15:55:54.185128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.836 ms 00:29:05.697 [2024-12-06 15:55:54.185140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.186682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.186723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:05.697 [2024-12-06 15:55:54.186740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:29:05.697 [2024-12-06 15:55:54.186752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.187161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.187185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:05.697 [2024-12-06 15:55:54.187199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:29:05.697 [2024-12-06 15:55:54.187210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 
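By this point the dirty load has recovered the device: the NV cache reported 2 full and 2 empty chunks, and the valid map, band info, and trim metadata have been restored from media before the P2L and L2P restore steps that follow. For inspecting such a recovered instance interactively, recent SPDK exposes FTL query RPCs; a hedged sketch (exact output fields vary by SPDK version):

```bash
# Query the recovered FTL instance; both RPCs ship with recent SPDK's
# rpc.py, but treat the output layout as version-dependent.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_ftl_get_properties -b ftl0
"$SPDK/scripts/rpc.py" bdev_ftl_get_stats -b ftl0
```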
[2024-12-06 15:55:54.212090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.212164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:05.697 [2024-12-06 15:55:54.212186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.836 ms 00:29:05.697 [2024-12-06 15:55:54.212199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.219303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:05.697 [2024-12-06 15:55:54.222136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.222172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:05.697 [2024-12-06 15:55:54.222210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.874 ms 00:29:05.697 [2024-12-06 15:55:54.222223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.222307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.222334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:05.697 [2024-12-06 15:55:54.222359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:05.697 [2024-12-06 15:55:54.222377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.222518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.222539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:05.697 [2024-12-06 15:55:54.222553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:05.697 [2024-12-06 15:55:54.222565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.697 [2024-12-06 15:55:54.222618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.697 [2024-12-06 15:55:54.222641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:05.697 [2024-12-06 15:55:54.222654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:05.697 [2024-12-06 15:55:54.222666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.698 [2024-12-06 15:55:54.222724] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:05.698 [2024-12-06 15:55:54.222744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.698 [2024-12-06 15:55:54.222776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:05.698 [2024-12-06 15:55:54.222789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:05.698 [2024-12-06 15:55:54.222809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.698 [2024-12-06 15:55:54.227050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.698 [2024-12-06 15:55:54.227087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:05.698 [2024-12-06 15:55:54.227104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:29:05.698 [2024-12-06 15:55:54.227124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.698 [2024-12-06 15:55:54.227225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.698 [2024-12-06 15:55:54.227248] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
[2024-12-06 15:55:54.227261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
[2024-12-06 15:55:54.227273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-06 15:55:54.229107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.806 ms, result 0
00:29:06.633 [2024-12-06T15:55:56.256Z] Copying: 22/1024 [MB] (22 MBps) [... per-second progress records at 20-22 MBps elided ...] [2024-12-06T15:56:41.656Z] Copying: 1048228/1048576 [kB] (7564 kBps) [2024-12-06T15:56:41.656Z] Copying: 1024/1024 [MB] (average 21 MBps)
[2024-12-06 15:56:41.628507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.963
[2024-12-06 15:56:41.628810] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:52.963 [2024-12-06 15:56:41.628845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:52.963 [2024-12-06 15:56:41.628881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.963 [2024-12-06 15:56:41.631154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:52.963 [2024-12-06 15:56:41.636006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.963 [2024-12-06 15:56:41.636048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:52.963 [2024-12-06 15:56:41.636065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:29:52.963 [2024-12-06 15:56:41.636075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.963 [2024-12-06 15:56:41.647065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.963 [2024-12-06 15:56:41.647099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:52.963 [2024-12-06 15:56:41.647114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.237 ms 00:29:52.963 [2024-12-06 15:56:41.647123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.223 [2024-12-06 15:56:41.668872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.223 [2024-12-06 15:56:41.668921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:53.223 [2024-12-06 15:56:41.668948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.729 ms 00:29:53.223 [2024-12-06 15:56:41.668963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.223 [2024-12-06 15:56:41.674235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.223 [2024-12-06 15:56:41.674261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:53.223 [2024-12-06 15:56:41.674273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.205 ms 00:29:53.223 [2024-12-06 15:56:41.674282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.223 [2024-12-06 15:56:41.675711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.223 [2024-12-06 15:56:41.675758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:53.223 [2024-12-06 15:56:41.675770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:29:53.223 [2024-12-06 15:56:41.675780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.223 [2024-12-06 15:56:41.680077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.223 [2024-12-06 15:56:41.680107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:53.223 [2024-12-06 15:56:41.680120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:29:53.223 [2024-12-06 15:56:41.680130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.223 [2024-12-06 15:56:41.798868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.223 [2024-12-06 15:56:41.798938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:53.224 [2024-12-06 15:56:41.798982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.704 ms 00:29:53.224 [2024-12-06 15:56:41.799007] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.224 [2024-12-06 15:56:41.800978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.224 [2024-12-06 15:56:41.801022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:53.224 [2024-12-06 15:56:41.801035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:29:53.224 [2024-12-06 15:56:41.801058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.224 [2024-12-06 15:56:41.802645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.224 [2024-12-06 15:56:41.802703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:53.224 [2024-12-06 15:56:41.802716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:29:53.224 [2024-12-06 15:56:41.802726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.224 [2024-12-06 15:56:41.803999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.224 [2024-12-06 15:56:41.804057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:53.224 [2024-12-06 15:56:41.804070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:29:53.224 [2024-12-06 15:56:41.804079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.224 [2024-12-06 15:56:41.805332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.224 [2024-12-06 15:56:41.805361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:53.224 [2024-12-06 15:56:41.805372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:29:53.224 [2024-12-06 15:56:41.805381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.224 [2024-12-06 15:56:41.805411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:53.224 [2024-12-06 15:56:41.805436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127232 / 261120 wr_cnt: 1 state: open 00:29:53.224 [2024-12-06 15:56:41.805449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 
[2024-12-06 15:56:41.805573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 
state: free 00:29:53.224 [2024-12-06 15:56:41.805830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.805987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 
0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:53.224 [2024-12-06 15:56:41.806352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:53.225 [2024-12-06 15:56:41.806668] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:53.225 [2024-12-06 15:56:41.806692] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4e0e507-6c02-47f6-bfa2-3cd271d0b40b 00:29:53.225 [2024-12-06 15:56:41.806704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127232 00:29:53.225 [2024-12-06 15:56:41.806721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128192 00:29:53.225 [2024-12-06 15:56:41.806731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127232 00:29:53.225 [2024-12-06 15:56:41.806742] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:29:53.225 [2024-12-06 15:56:41.806751] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:53.225 [2024-12-06 15:56:41.806769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:53.225 [2024-12-06 15:56:41.806779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:53.225 [2024-12-06 15:56:41.806787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:53.225 [2024-12-06 15:56:41.806796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:53.225 [2024-12-06 15:56:41.806806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.225 [2024-12-06 15:56:41.806817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump 
statistics 00:29:53.225 [2024-12-06 15:56:41.806831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:29:53.225 [2024-12-06 15:56:41.806841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.809573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.225 [2024-12-06 15:56:41.809599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:53.225 [2024-12-06 15:56:41.809611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:29:53.225 [2024-12-06 15:56:41.809621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.809789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.225 [2024-12-06 15:56:41.809810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:53.225 [2024-12-06 15:56:41.809821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:29:53.225 [2024-12-06 15:56:41.809831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.819413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.819453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.225 [2024-12-06 15:56:41.819467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.819478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.819540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.819553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.225 [2024-12-06 15:56:41.819565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.819575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.819645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.819663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.225 [2024-12-06 15:56:41.819685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.819696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.819724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.819741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.225 [2024-12-06 15:56:41.819752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.819762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.835978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.836060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.225 [2024-12-06 15:56:41.836087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.836099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.847703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.847750] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.225 [2024-12-06 15:56:41.847765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.847775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.847855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.847871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.225 [2024-12-06 15:56:41.847882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.847892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.847933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.848008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.225 [2024-12-06 15:56:41.848027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.848049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.848149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.848168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.225 [2024-12-06 15:56:41.848181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.848192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.848238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.848255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:53.225 [2024-12-06 15:56:41.848268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.848286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.848364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.848393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.225 [2024-12-06 15:56:41.848405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.848427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.848477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.225 [2024-12-06 15:56:41.848492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.225 [2024-12-06 15:56:41.848508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.225 [2024-12-06 15:56:41.848518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.225 [2024-12-06 15:56:41.848709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 221.010 ms, result 0 00:29:54.160 00:29:54.161 00:29:54.161 15:56:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:56.061 15:56:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:56.061 [2024-12-06 15:56:44.538357] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:29:56.061 [2024-12-06 15:56:44.538571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93306 ] 00:29:56.061 [2024-12-06 15:56:44.702179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.320 [2024-12-06 15:56:44.753837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.320 [2024-12-06 15:56:44.904033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:56.320 [2024-12-06 15:56:44.904178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:56.591 [2024-12-06 15:56:45.063698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.063747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:56.591 [2024-12-06 15:56:45.063789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:56.591 [2024-12-06 15:56:45.063810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.063877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.063894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:56.591 [2024-12-06 15:56:45.063905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:56.591 [2024-12-06 15:56:45.063927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.064002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:56.591 [2024-12-06 15:56:45.064318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:56.591 [2024-12-06 15:56:45.064353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.064365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.591 [2024-12-06 15:56:45.064382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:29:56.591 [2024-12-06 15:56:45.064392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.066757] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:56.591 [2024-12-06 15:56:45.070187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.070247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:56.591 [2024-12-06 15:56:45.070278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.432 ms 00:29:56.591 [2024-12-06 15:56:45.070319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.070384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.070401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:56.591 [2024-12-06 15:56:45.070412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:56.591 
[2024-12-06 15:56:45.070422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.081730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.081776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.591 [2024-12-06 15:56:45.081811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.262 ms 00:29:56.591 [2024-12-06 15:56:45.081820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.081933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.081972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:56.591 [2024-12-06 15:56:45.081986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:56.591 [2024-12-06 15:56:45.081996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.082110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.082142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:56.591 [2024-12-06 15:56:45.082156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:56.591 [2024-12-06 15:56:45.082174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.082218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:56.591 [2024-12-06 15:56:45.084851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.084890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.591 [2024-12-06 15:56:45.084918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:29:56.591 [2024-12-06 15:56:45.084928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.084998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.591 [2024-12-06 15:56:45.085014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:56.591 [2024-12-06 15:56:45.085025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:56.591 [2024-12-06 15:56:45.085040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.591 [2024-12-06 15:56:45.085065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:56.591 [2024-12-06 15:56:45.085103] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:56.591 [2024-12-06 15:56:45.085192] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:56.591 [2024-12-06 15:56:45.085219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:56.591 [2024-12-06 15:56:45.085324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:56.591 [2024-12-06 15:56:45.085338] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:56.591 [2024-12-06 15:56:45.085360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:29:56.591 [2024-12-06 15:56:45.085374] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:56.592 [2024-12-06 15:56:45.085387] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:56.592 [2024-12-06 15:56:45.085398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:56.592 [2024-12-06 15:56:45.085409] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:56.592 [2024-12-06 15:56:45.085419] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:56.592 [2024-12-06 15:56:45.085429] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:56.592 [2024-12-06 15:56:45.085441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.085461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:56.592 [2024-12-06 15:56:45.085471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:29:56.592 [2024-12-06 15:56:45.085482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.085569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.085588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:56.592 [2024-12-06 15:56:45.085599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:56.592 [2024-12-06 15:56:45.085620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.085734] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:56.592 [2024-12-06 15:56:45.085773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:56.592 [2024-12-06 15:56:45.085786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:56.592 [2024-12-06 15:56:45.085798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:56.592 [2024-12-06 15:56:45.085819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:56.592 [2024-12-06 15:56:45.085839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:56.592 [2024-12-06 15:56:45.085848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:56.592 [2024-12-06 15:56:45.085867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:56.592 [2024-12-06 15:56:45.085876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:56.592 [2024-12-06 15:56:45.085885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:56.592 [2024-12-06 15:56:45.085894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:56.592 [2024-12-06 15:56:45.085907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:56.592 [2024-12-06 15:56:45.085917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:29:56.592 [2024-12-06 15:56:45.085953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:56.592 [2024-12-06 15:56:45.085966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:56.592 [2024-12-06 15:56:45.085986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:56.592 [2024-12-06 15:56:45.085995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:56.592 [2024-12-06 15:56:45.086014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:56.592 [2024-12-06 15:56:45.086043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:56.592 [2024-12-06 15:56:45.086072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:56.592 [2024-12-06 15:56:45.086100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:56.592 [2024-12-06 15:56:45.086123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:56.592 [2024-12-06 15:56:45.086133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:56.592 [2024-12-06 15:56:45.086142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:56.592 [2024-12-06 15:56:45.086152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:56.592 [2024-12-06 15:56:45.086162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:56.592 [2024-12-06 15:56:45.086171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:56.592 [2024-12-06 15:56:45.086191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:56.592 [2024-12-06 15:56:45.086201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:56.592 [2024-12-06 15:56:45.086225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:56.592 [2024-12-06 15:56:45.086235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.592 [2024-12-06 15:56:45.086265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:56.592 [2024-12-06 15:56:45.086275] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:56.592 [2024-12-06 15:56:45.086288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:56.592 [2024-12-06 15:56:45.086298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:56.592 [2024-12-06 15:56:45.086308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:56.592 [2024-12-06 15:56:45.086318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:56.592 [2024-12-06 15:56:45.086330] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:56.592 [2024-12-06 15:56:45.086343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:56.592 [2024-12-06 15:56:45.086365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:56.592 [2024-12-06 15:56:45.086376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:56.592 [2024-12-06 15:56:45.086386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:56.592 [2024-12-06 15:56:45.086396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:56.592 [2024-12-06 15:56:45.086406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:56.592 [2024-12-06 15:56:45.086416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:56.592 [2024-12-06 15:56:45.086426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:56.592 [2024-12-06 15:56:45.086437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:56.592 [2024-12-06 15:56:45.086458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:56.592 [2024-12-06 15:56:45.086516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:56.592 [2024-12-06 15:56:45.086527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.592 [2024-12-06 15:56:45.086550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:56.592 [2024-12-06 15:56:45.086560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:56.592 [2024-12-06 15:56:45.086570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:56.592 [2024-12-06 15:56:45.086582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.086593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:56.592 [2024-12-06 15:56:45.086604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:29:56.592 [2024-12-06 15:56:45.086620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.104578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.104653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.592 [2024-12-06 15:56:45.104686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.873 ms 00:29:56.592 [2024-12-06 15:56:45.104697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.104797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.104811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:56.592 [2024-12-06 15:56:45.104822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:56.592 [2024-12-06 15:56:45.104841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.127635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.127695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.592 [2024-12-06 15:56:45.127711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.671 ms 00:29:56.592 [2024-12-06 15:56:45.127723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.127775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.127791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.592 [2024-12-06 15:56:45.127802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.592 [2024-12-06 15:56:45.127813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.128706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.128737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.592 [2024-12-06 15:56:45.128751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:29:56.592 [2024-12-06 15:56:45.128790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.129016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.129037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.592 [2024-12-06 15:56:45.129051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:29:56.592 [2024-12-06 15:56:45.129062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.139653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.139690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.592 [2024-12-06 15:56:45.139705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.562 ms 00:29:56.592 [2024-12-06 15:56:45.139715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.143369] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:56.592 [2024-12-06 15:56:45.143408] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:56.592 [2024-12-06 15:56:45.143437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.143449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:56.592 [2024-12-06 15:56:45.143460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:29:56.592 [2024-12-06 15:56:45.143470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.592 [2024-12-06 15:56:45.156773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.592 [2024-12-06 15:56:45.156814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:56.593 [2024-12-06 15:56:45.156830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.261 ms 00:29:56.593 [2024-12-06 15:56:45.156841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.158801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.158837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:56.593 [2024-12-06 15:56:45.158851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:29:56.593 [2024-12-06 15:56:45.158860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.160510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.160543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:56.593 [2024-12-06 15:56:45.160555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:29:56.593 [2024-12-06 15:56:45.160564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.160966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.161008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:56.593 [2024-12-06 15:56:45.161023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:29:56.593 [2024-12-06 15:56:45.161049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.185501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.185594] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:56.593 [2024-12-06 15:56:45.185614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.414 ms 00:29:56.593 [2024-12-06 15:56:45.185625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.192347] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:56.593 [2024-12-06 15:56:45.194512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.194545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:56.593 [2024-12-06 15:56:45.194559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.831 ms 00:29:56.593 [2024-12-06 15:56:45.194571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.194663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.194681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:56.593 [2024-12-06 15:56:45.194694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:56.593 [2024-12-06 15:56:45.194719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.197240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.197274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:56.593 [2024-12-06 15:56:45.197292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.458 ms 00:29:56.593 [2024-12-06 15:56:45.197303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.197338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.197363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:56.593 [2024-12-06 15:56:45.197382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:56.593 [2024-12-06 15:56:45.197392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.197452] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:56.593 [2024-12-06 15:56:45.197469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.197479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:56.593 [2024-12-06 15:56:45.197508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:56.593 [2024-12-06 15:56:45.197535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.201839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.201876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:56.593 [2024-12-06 15:56:45.201891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:29:56.593 [2024-12-06 15:56:45.201901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.201990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.593 [2024-12-06 15:56:45.202008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:56.593 [2024-12-06 15:56:45.202021] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:56.593 [2024-12-06 15:56:45.202038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.593 [2024-12-06 15:56:45.207918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.820 ms, result 0 00:29:57.967  [copy progress ticks condensed: 1000/1048576 kB (2024-12-06T15:56:47Z) through 998/1024 MB (2024-12-06T15:57:23Z), 26-31 MBps after ramp-up] [2024-12-06T15:57:23.358Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 15:57:23.294820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.294964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:34.665 [2024-12-06 15:57:23.295005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:34.665 [2024-12-06 15:57:23.295052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.295133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:34.665 [2024-12-06 15:57:23.296645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.296726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:34.665 [2024-12-06 15:57:23.296754] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:30:34.665 [2024-12-06 15:57:23.296776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.297390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.297446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:34.665 [2024-12-06 15:57:23.297473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:34.665 [2024-12-06 15:57:23.297517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.309672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.309752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:34.665 [2024-12-06 15:57:23.309793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.116 ms 00:30:34.665 [2024-12-06 15:57:23.309805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.315137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.315169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:34.665 [2024-12-06 15:57:23.315198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:30:34.665 [2024-12-06 15:57:23.315209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.316776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.316833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:34.665 [2024-12-06 15:57:23.316863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:30:34.665 [2024-12-06 15:57:23.316873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.321418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.321457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:34.665 [2024-12-06 15:57:23.321497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:30:34.665 [2024-12-06 15:57:23.321507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.323293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.323347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:34.665 [2024-12-06 15:57:23.323376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:30:34.665 [2024-12-06 15:57:23.323388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.325566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.325605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:34.665 [2024-12-06 15:57:23.325634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:30:34.665 [2024-12-06 15:57:23.325643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.665 [2024-12-06 15:57:23.327359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.665 [2024-12-06 15:57:23.327395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 
00:30:34.666 [2024-12-06 15:57:23.327423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:30:34.666 [2024-12-06 15:57:23.327433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.666 [2024-12-06 15:57:23.328750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.666 [2024-12-06 15:57:23.328788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:34.666 [2024-12-06 15:57:23.328817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:30:34.666 [2024-12-06 15:57:23.328827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.666 [2024-12-06 15:57:23.330326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.666 [2024-12-06 15:57:23.330361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:34.666 [2024-12-06 15:57:23.330390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:30:34.666 [2024-12-06 15:57:23.330400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.666 [2024-12-06 15:57:23.330433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:34.666 [2024-12-06 15:57:23.330454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:34.666 [2024-12-06 15:57:23.330467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:34.666 [2024-12-06 15:57:23.330478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 
15:57:23.330633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:30:34.666 [2024-12-06 15:57:23.330898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.330973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:34.666 [2024-12-06 15:57:23.331412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:34.667 [2024-12-06 15:57:23.331656] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:34.667 [2024-12-06 15:57:23.331688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4e0e507-6c02-47f6-bfa2-3cd271d0b40b 00:30:34.667 [2024-12-06 15:57:23.331704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:34.667 [2024-12-06 15:57:23.331714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137408 00:30:34.667 [2024-12-06 15:57:23.331725] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135424 00:30:34.667 [2024-12-06 15:57:23.331736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:30:34.667 [2024-12-06 15:57:23.331746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:34.667 [2024-12-06 15:57:23.331757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:34.667 [2024-12-06 15:57:23.331767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:34.667 [2024-12-06 15:57:23.331776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:34.667 [2024-12-06 15:57:23.331785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:34.667 [2024-12-06 15:57:23.331795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.667 [2024-12-06 15:57:23.331806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:34.667 [2024-12-06 15:57:23.331816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:30:34.667 [2024-12-06 15:57:23.331826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.334106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.667 [2024-12-06 15:57:23.334150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:34.667 [2024-12-06 15:57:23.334163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:30:34.667 [2024-12-06 15:57:23.334187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.334321] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.667 [2024-12-06 15:57:23.334334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:34.667 [2024-12-06 15:57:23.334352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:30:34.667 [2024-12-06 15:57:23.334393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.342736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.667 [2024-12-06 15:57:23.342786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:34.667 [2024-12-06 15:57:23.342816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.667 [2024-12-06 15:57:23.342837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.342904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.667 [2024-12-06 15:57:23.342917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:34.667 [2024-12-06 15:57:23.342934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.667 [2024-12-06 15:57:23.342944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.343027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.667 [2024-12-06 15:57:23.343076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:34.667 [2024-12-06 15:57:23.343099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.667 [2024-12-06 15:57:23.343110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.667 [2024-12-06 15:57:23.343131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.667 [2024-12-06 15:57:23.343144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:34.667 [2024-12-06 15:57:23.343155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.667 [2024-12-06 15:57:23.343171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.357680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.357731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:34.926 [2024-12-06 15:57:23.357762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.357773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.369388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.369440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:34.926 [2024-12-06 15:57:23.369483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.369495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.369565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.369580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:34.926 [2024-12-06 15:57:23.369607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.369630] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.369691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.369722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:34.926 [2024-12-06 15:57:23.369733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.369744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.369857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.369875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:34.926 [2024-12-06 15:57:23.369887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.369897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.369943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.370011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:34.926 [2024-12-06 15:57:23.370027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.370039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.370097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.370113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:34.926 [2024-12-06 15:57:23.370125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.370150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.370215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.926 [2024-12-06 15:57:23.370233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:34.926 [2024-12-06 15:57:23.370245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.926 [2024-12-06 15:57:23.370256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.926 [2024-12-06 15:57:23.370457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.630 ms, result 0 00:30:35.185 00:30:35.185 00:30:35.185 15:57:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:37.086 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:37.086 15:57:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:37.086 [2024-12-06 15:57:25.447924] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:30:37.086 [2024-12-06 15:57:25.448258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93721 ] 00:30:37.086 [2024-12-06 15:57:25.591175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.086 [2024-12-06 15:57:25.636848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.086 [2024-12-06 15:57:25.771573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:37.086 [2024-12-06 15:57:25.771656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:37.346 [2024-12-06 15:57:25.930631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.930680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:37.346 [2024-12-06 15:57:25.930700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:37.346 [2024-12-06 15:57:25.930710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.930781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.930801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:37.346 [2024-12-06 15:57:25.930812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:37.346 [2024-12-06 15:57:25.930839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.930876] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:37.346 [2024-12-06 15:57:25.931138] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:37.346 [2024-12-06 15:57:25.931170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.931182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:37.346 [2024-12-06 15:57:25.931200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:30:37.346 [2024-12-06 15:57:25.931210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.933281] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:37.346 [2024-12-06 15:57:25.936096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.936133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:37.346 [2024-12-06 15:57:25.936149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:30:37.346 [2024-12-06 15:57:25.936170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.936236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.936254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:37.346 [2024-12-06 15:57:25.936266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:37.346 [2024-12-06 15:57:25.936285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.944917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:37.346 [2024-12-06 15:57:25.944965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:37.346 [2024-12-06 15:57:25.944987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.568 ms 00:30:37.346 [2024-12-06 15:57:25.944998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.945110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.945128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:37.346 [2024-12-06 15:57:25.945143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:30:37.346 [2024-12-06 15:57:25.945153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.945239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.945273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:37.346 [2024-12-06 15:57:25.945285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:37.346 [2024-12-06 15:57:25.945310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.945366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:37.346 [2024-12-06 15:57:25.947560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.947594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:37.346 [2024-12-06 15:57:25.947619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:30:37.346 [2024-12-06 15:57:25.947629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.947681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.947697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:37.346 [2024-12-06 15:57:25.947708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:37.346 [2024-12-06 15:57:25.947726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.947764] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:37.346 [2024-12-06 15:57:25.947796] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:37.346 [2024-12-06 15:57:25.947841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:37.346 [2024-12-06 15:57:25.947879] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:37.346 [2024-12-06 15:57:25.947990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:37.346 [2024-12-06 15:57:25.948008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:37.346 [2024-12-06 15:57:25.948029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:37.346 [2024-12-06 15:57:25.948043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:37.346 [2024-12-06 15:57:25.948055] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:37.346 [2024-12-06 15:57:25.948076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:37.346 [2024-12-06 15:57:25.948087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:37.346 [2024-12-06 15:57:25.948098] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:37.346 [2024-12-06 15:57:25.948108] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:37.346 [2024-12-06 15:57:25.948119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.948137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:37.346 [2024-12-06 15:57:25.948152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:30:37.346 [2024-12-06 15:57:25.948169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.948256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.346 [2024-12-06 15:57:25.948269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:37.346 [2024-12-06 15:57:25.948280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:37.346 [2024-12-06 15:57:25.948290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.346 [2024-12-06 15:57:25.948405] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:37.346 [2024-12-06 15:57:25.948432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:37.346 [2024-12-06 15:57:25.948445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:37.346 [2024-12-06 15:57:25.948455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.346 [2024-12-06 15:57:25.948482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:37.346 [2024-12-06 15:57:25.948493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:37.346 [2024-12-06 15:57:25.948503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:37.346 [2024-12-06 15:57:25.948513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:37.346 [2024-12-06 15:57:25.948522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:37.346 [2024-12-06 15:57:25.948537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:37.347 [2024-12-06 15:57:25.948547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:37.347 [2024-12-06 15:57:25.948556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:37.347 [2024-12-06 15:57:25.948565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:37.347 [2024-12-06 15:57:25.948575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:37.347 [2024-12-06 15:57:25.948589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:37.347 [2024-12-06 15:57:25.948611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:37.347 [2024-12-06 15:57:25.948631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948640] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:37.347 [2024-12-06 15:57:25.948674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:37.347 [2024-12-06 15:57:25.948703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:37.347 [2024-12-06 15:57:25.948741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:37.347 [2024-12-06 15:57:25.948769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:37.347 [2024-12-06 15:57:25.948798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:37.347 [2024-12-06 15:57:25.948816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:37.347 [2024-12-06 15:57:25.948826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:37.347 [2024-12-06 15:57:25.948835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:37.347 [2024-12-06 15:57:25.948860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:37.347 [2024-12-06 15:57:25.948869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:37.347 [2024-12-06 15:57:25.948878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:37.347 [2024-12-06 15:57:25.948900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:37.347 [2024-12-06 15:57:25.948911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948920] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:37.347 [2024-12-06 15:57:25.948934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:37.347 [2024-12-06 15:57:25.948945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:37.347 [2024-12-06 15:57:25.948969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:37.347 [2024-12-06 15:57:25.948984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:37.347 [2024-12-06 15:57:25.948994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:37.347 [2024-12-06 15:57:25.949004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:37.347 
[2024-12-06 15:57:25.949013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:37.347 [2024-12-06 15:57:25.949023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:37.347 [2024-12-06 15:57:25.949032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:37.347 [2024-12-06 15:57:25.949043] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:37.347 [2024-12-06 15:57:25.949056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:37.347 [2024-12-06 15:57:25.949077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:37.347 [2024-12-06 15:57:25.949092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:37.347 [2024-12-06 15:57:25.949103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:37.347 [2024-12-06 15:57:25.949113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:37.347 [2024-12-06 15:57:25.949123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:37.347 [2024-12-06 15:57:25.949132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:37.347 [2024-12-06 15:57:25.949143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:37.347 [2024-12-06 15:57:25.949152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:37.347 [2024-12-06 15:57:25.949174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:37.347 [2024-12-06 15:57:25.949225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:37.347 [2024-12-06 15:57:25.949246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:37.347 [2024-12-06 15:57:25.949268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:37.347 [2024-12-06 15:57:25.949282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:37.347 [2024-12-06 15:57:25.949293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:37.347 [2024-12-06 15:57:25.949304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.949315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:37.347 [2024-12-06 15:57:25.949325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:30:37.347 [2024-12-06 15:57:25.949336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.967314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.967366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:37.347 [2024-12-06 15:57:25.967399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.892 ms 00:30:37.347 [2024-12-06 15:57:25.967411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.967510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.967525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:37.347 [2024-12-06 15:57:25.967537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:37.347 [2024-12-06 15:57:25.967547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.990945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.990988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:37.347 [2024-12-06 15:57:25.991006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.307 ms 00:30:37.347 [2024-12-06 15:57:25.991017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.991084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.991099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:37.347 [2024-12-06 15:57:25.991117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:37.347 [2024-12-06 15:57:25.991129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.991743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.991785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:37.347 [2024-12-06 15:57:25.991810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:30:37.347 [2024-12-06 15:57:25.991821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:25.992008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:25.992028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:37.347 [2024-12-06 15:57:25.992040] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:30:37.347 [2024-12-06 15:57:25.992050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:26.001217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.347 [2024-12-06 15:57:26.001257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:37.347 [2024-12-06 15:57:26.001272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.139 ms 00:30:37.347 [2024-12-06 15:57:26.001282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.347 [2024-12-06 15:57:26.004263] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:37.347 [2024-12-06 15:57:26.004303] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:37.347 [2024-12-06 15:57:26.004327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.348 [2024-12-06 15:57:26.004338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:37.348 [2024-12-06 15:57:26.004349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:30:37.348 [2024-12-06 15:57:26.004358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.348 [2024-12-06 15:57:26.017565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.348 [2024-12-06 15:57:26.017606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:37.348 [2024-12-06 15:57:26.017621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.165 ms 00:30:37.348 [2024-12-06 15:57:26.017631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.348 [2024-12-06 15:57:26.019491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.348 [2024-12-06 15:57:26.019527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:37.348 [2024-12-06 15:57:26.019541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:30:37.348 [2024-12-06 15:57:26.019551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.348 [2024-12-06 15:57:26.021147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.348 [2024-12-06 15:57:26.021184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:37.348 [2024-12-06 15:57:26.021197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:30:37.348 [2024-12-06 15:57:26.021216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.348 [2024-12-06 15:57:26.021547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.348 [2024-12-06 15:57:26.021574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:37.348 [2024-12-06 15:57:26.021588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:30:37.348 [2024-12-06 15:57:26.021598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.046279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.046348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:37.606 [2024-12-06 15:57:26.046385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.650 ms 00:30:37.606 [2024-12-06 15:57:26.046410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.053047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:37.606 [2024-12-06 15:57:26.055293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.055332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:37.606 [2024-12-06 15:57:26.055348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.831 ms 00:30:37.606 [2024-12-06 15:57:26.055359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.055419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.055446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:37.606 [2024-12-06 15:57:26.055470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:37.606 [2024-12-06 15:57:26.055480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.056564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.056596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:37.606 [2024-12-06 15:57:26.056647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:30:37.606 [2024-12-06 15:57:26.056685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.056722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.056748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:37.606 [2024-12-06 15:57:26.056761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:37.606 [2024-12-06 15:57:26.056771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.056818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:37.606 [2024-12-06 15:57:26.056836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.056847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:37.606 [2024-12-06 15:57:26.056867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:37.606 [2024-12-06 15:57:26.056878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.060848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.060886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:37.606 [2024-12-06 15:57:26.060917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:30:37.606 [2024-12-06 15:57:26.060928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 [2024-12-06 15:57:26.061035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.606 [2024-12-06 15:57:26.061053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:37.606 [2024-12-06 15:57:26.061065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:37.606 [2024-12-06 15:57:26.061083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.606 
[2024-12-06 15:57:26.062587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.404 ms, result 0 00:30:38.980 [2024-12-06T15:58:04.329Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-06 15:58:04.268531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.268632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:15.636 [2024-12-06 15:58:04.268710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.636 [2024-12-06 15:58:04.268738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.268792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:15.636 [2024-12-06 15:58:04.270110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.270140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:15.636 [2024-12-06 15:58:04.270157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:31:15.636 [2024-12-06 15:58:04.270188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.270540]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.270573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:15.636 [2024-12-06 15:58:04.270590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:31:15.636 [2024-12-06 15:58:04.270611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.276753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.276982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:15.636 [2024-12-06 15:58:04.277183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:31:15.636 [2024-12-06 15:58:04.277365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.286431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.286625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:15.636 [2024-12-06 15:58:04.286807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.971 ms 00:31:15.636 [2024-12-06 15:58:04.287010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.288760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.288972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:15.636 [2024-12-06 15:58:04.289141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:31:15.636 [2024-12-06 15:58:04.289214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.293454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.293658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:15.636 [2024-12-06 15:58:04.293813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.055 ms 00:31:15.636 [2024-12-06 15:58:04.293976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.295969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.296173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:15.636 [2024-12-06 15:58:04.296206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:31:15.636 [2024-12-06 15:58:04.296222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.298596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.298653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:15.636 [2024-12-06 15:58:04.298672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:31:15.636 [2024-12-06 15:58:04.298686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.300399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.300456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:15.636 [2024-12-06 15:58:04.300474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:31:15.636 [2024-12-06 15:58:04.300488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.301975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.302031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:15.636 [2024-12-06 15:58:04.302049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:31:15.636 [2024-12-06 15:58:04.302062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.303387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.636 [2024-12-06 15:58:04.303444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:15.636 [2024-12-06 15:58:04.303463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:31:15.636 [2024-12-06 15:58:04.303477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.636 [2024-12-06 15:58:04.303520] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:15.636 [2024-12-06 15:58:04.303548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:15.636 [2024-12-06 15:58:04.303567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:15.636 [2024-12-06 15:58:04.303583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:15.636 [2024-12-06 15:58:04.303701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303819] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.303994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304230] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 
15:58:04.304598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:15.637 [2024-12-06 15:58:04.304628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.304985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:31:15.638 [2024-12-06 15:58:04.305015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:15.638 [2024-12-06 15:58:04.305145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:15.638 [2024-12-06 15:58:04.305160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b4e0e507-6c02-47f6-bfa2-3cd271d0b40b 00:31:15.638 [2024-12-06 15:58:04.305189] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:15.638 [2024-12-06 15:58:04.305204] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:15.638 [2024-12-06 15:58:04.305217] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:15.638 [2024-12-06 15:58:04.305233] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:15.638 [2024-12-06 15:58:04.305246] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:15.638 [2024-12-06 15:58:04.305272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:15.638 [2024-12-06 15:58:04.305286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:15.638 [2024-12-06 15:58:04.305298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:15.638 [2024-12-06 15:58:04.305311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:15.638 [2024-12-06 15:58:04.305325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.638 [2024-12-06 15:58:04.305350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:15.638 [2024-12-06 15:58:04.305366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:31:15.638 [2024-12-06 15:58:04.305380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.308397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.638 [2024-12-06 15:58:04.308441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:15.638 [2024-12-06 15:58:04.308460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:31:15.638 [2024-12-06 15:58:04.308474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.308700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.638 [2024-12-06 15:58:04.308721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 
00:31:15.638 [2024-12-06 15:58:04.308737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:31:15.638 [2024-12-06 15:58:04.308766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.319083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.638 [2024-12-06 15:58:04.319133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.638 [2024-12-06 15:58:04.319152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.638 [2024-12-06 15:58:04.319176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.319257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.638 [2024-12-06 15:58:04.319304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.638 [2024-12-06 15:58:04.319320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.638 [2024-12-06 15:58:04.319335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.319436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.638 [2024-12-06 15:58:04.319468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.638 [2024-12-06 15:58:04.319486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.638 [2024-12-06 15:58:04.319501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.638 [2024-12-06 15:58:04.319539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.638 [2024-12-06 15:58:04.319556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.638 [2024-12-06 15:58:04.319571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.638 [2024-12-06 15:58:04.319585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.336124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.336201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.898 [2024-12-06 15:58:04.336224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.336253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.348716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.348780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.898 [2024-12-06 15:58:04.348802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.348817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.348916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.348957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.898 [2024-12-06 15:58:04.348977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.348991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.349091] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.898 [2024-12-06 15:58:04.349109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.349124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.349282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.898 [2024-12-06 15:58:04.349301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.349316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.349414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:15.898 [2024-12-06 15:58:04.349431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.349445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.349529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.898 [2024-12-06 15:58:04.349545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.349576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:15.898 [2024-12-06 15:58:04.349690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.898 [2024-12-06 15:58:04.349709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:15.898 [2024-12-06 15:58:04.349741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.898 [2024-12-06 15:58:04.349966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.362 ms, result 0 00:31:16.156 00:31:16.156 00:31:16.156 15:58:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:18.058 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:18.058 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:18.058 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:18.058 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:18.058 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:18.058 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:18.316 Process with pid 91821 is not found 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91821 00:31:18.316 15:58:06 
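(Note: the dirty_shutdown.sh@96 step above replays an md5 manifest recorded before the device was shut down dirty; testfile2 printing OK means recovery restored every written block. A minimal sketch of that record-then-verify pattern — the single checksums.md5 manifest is an illustrative simplification of the per-file .md5 files in the trace:

    md5sum testfile testfile2 > checksums.md5    # recorded while the FTL bdev is live
    # ... dirty shutdown, then FTL restore, happens here ...
    md5sum -c checksums.md5                      # prints OK per file only if the data survived

)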
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91821 ']' 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91821 00:31:18.316 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91821) - No such process 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91821 is not found' 00:31:18.316 15:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:18.575 Remove shared memory files 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:18.575 ************************************ 00:31:18.575 END TEST ftl_dirty_shutdown 00:31:18.575 ************************************ 00:31:18.575 00:31:18.575 real 3m44.962s 00:31:18.575 user 4m16.919s 00:31:18.575 sys 0m36.822s 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.575 15:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 15:58:07 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:18.575 15:58:07 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:18.575 15:58:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.575 15:58:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 ************************************ 00:31:18.575 START TEST ftl_upgrade_shutdown 00:31:18.575 ************************************ 00:31:18.575 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:18.575 * Looking for test storage... 
00:31:18.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:18.575 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:18.575 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:18.575 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.834 --rc genhtml_branch_coverage=1 00:31:18.834 --rc genhtml_function_coverage=1 00:31:18.834 --rc genhtml_legend=1 00:31:18.834 --rc geninfo_all_blocks=1 00:31:18.834 --rc geninfo_unexecuted_blocks=1 00:31:18.834 00:31:18.834 ' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.834 --rc genhtml_branch_coverage=1 00:31:18.834 --rc genhtml_function_coverage=1 00:31:18.834 --rc genhtml_legend=1 00:31:18.834 --rc geninfo_all_blocks=1 00:31:18.834 --rc geninfo_unexecuted_blocks=1 00:31:18.834 00:31:18.834 ' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.834 --rc genhtml_branch_coverage=1 00:31:18.834 --rc genhtml_function_coverage=1 00:31:18.834 --rc genhtml_legend=1 00:31:18.834 --rc geninfo_all_blocks=1 00:31:18.834 --rc geninfo_unexecuted_blocks=1 00:31:18.834 00:31:18.834 ' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:18.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.834 --rc genhtml_branch_coverage=1 00:31:18.834 --rc genhtml_function_coverage=1 00:31:18.834 --rc genhtml_legend=1 00:31:18.834 --rc geninfo_all_blocks=1 00:31:18.834 --rc geninfo_unexecuted_blocks=1 00:31:18.834 00:31:18.834 ' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- 
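(Note: the scripts/common.sh trace above is the dotted-version comparison that decides which lcov option names to use: 'lt 1.15 2' splits both strings on '.', '-' and ':' and compares field by field. A condensed, self-contained bash sketch of the same logic — ver_lt is an illustrative name, not the function in scripts/common.sh:

    ver_lt() {                      # succeeds when $1 sorts before $2 as dotted versions
        local -a v1 v2
        local i a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            a=${v1[i]:-0}; b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x option names"

)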
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:18.834 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:18.835 15:58:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94196 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94196 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94196 ']' 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.835 15:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:18.835 [2024-12-06 15:58:07.488086] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:31:18.835 [2024-12-06 15:58:07.488319] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94196 ] 00:31:19.095 [2024-12-06 15:58:07.655299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.095 [2024-12-06 15:58:07.709671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:20.033 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:20.292 15:58:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:20.552 { 00:31:20.552 "name": "basen1", 00:31:20.552 "aliases": [ 00:31:20.552 "0da4a6ab-9008-4e8a-b24d-328a965fa6f8" 00:31:20.552 ], 00:31:20.552 "product_name": "NVMe disk", 00:31:20.552 "block_size": 4096, 00:31:20.552 "num_blocks": 1310720, 00:31:20.552 "uuid": "0da4a6ab-9008-4e8a-b24d-328a965fa6f8", 00:31:20.552 "numa_id": -1, 00:31:20.552 "assigned_rate_limits": { 00:31:20.552 "rw_ios_per_sec": 0, 00:31:20.552 "rw_mbytes_per_sec": 0, 00:31:20.552 "r_mbytes_per_sec": 0, 00:31:20.552 "w_mbytes_per_sec": 0 00:31:20.552 }, 00:31:20.552 "claimed": true, 00:31:20.552 "claim_type": "read_many_write_one", 00:31:20.552 "zoned": false, 00:31:20.552 "supported_io_types": { 00:31:20.552 "read": true, 00:31:20.552 "write": true, 00:31:20.552 "unmap": true, 00:31:20.552 "flush": true, 00:31:20.552 "reset": true, 00:31:20.552 "nvme_admin": true, 00:31:20.552 "nvme_io": true, 00:31:20.552 "nvme_io_md": false, 00:31:20.552 "write_zeroes": true, 00:31:20.552 "zcopy": false, 00:31:20.552 "get_zone_info": false, 00:31:20.552 "zone_management": false, 00:31:20.552 "zone_append": false, 00:31:20.552 "compare": true, 00:31:20.552 "compare_and_write": false, 00:31:20.552 "abort": true, 00:31:20.552 "seek_hole": false, 00:31:20.552 "seek_data": false, 00:31:20.552 "copy": true, 00:31:20.552 "nvme_iov_md": false 00:31:20.552 }, 00:31:20.552 "driver_specific": { 00:31:20.552 "nvme": [ 00:31:20.552 { 00:31:20.552 "pci_address": "0000:00:11.0", 00:31:20.552 "trid": { 00:31:20.552 "trtype": "PCIe", 00:31:20.552 "traddr": "0000:00:11.0" 00:31:20.552 }, 00:31:20.552 "ctrlr_data": { 00:31:20.552 "cntlid": 0, 00:31:20.552 "vendor_id": "0x1b36", 00:31:20.552 "model_number": "QEMU NVMe Ctrl", 00:31:20.552 "serial_number": "12341", 00:31:20.552 "firmware_revision": "8.0.0", 00:31:20.552 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:20.552 "oacs": { 00:31:20.552 "security": 0, 00:31:20.552 "format": 1, 00:31:20.552 "firmware": 0, 00:31:20.552 "ns_manage": 1 00:31:20.552 }, 00:31:20.552 "multi_ctrlr": false, 00:31:20.552 "ana_reporting": false 00:31:20.552 }, 00:31:20.552 "vs": { 00:31:20.552 "nvme_version": "1.4" 00:31:20.552 }, 00:31:20.552 "ns_data": { 00:31:20.552 "id": 1, 00:31:20.552 "can_share": false 00:31:20.552 } 00:31:20.552 } 00:31:20.552 ], 00:31:20.552 "mp_policy": "active_passive" 00:31:20.552 } 00:31:20.552 } 00:31:20.552 ]' 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- 
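(Note: get_bdev_size above derives a bdev's size in MiB from the two jq-extracted JSON fields: 4096 bytes/block x 1310720 blocks / 1024 / 1024 = 5120 MiB. The same arithmetic standalone, using the rpc.py path and jq filters from the trace:

    bs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096
    nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                                                                   # 5120 MiB

Since the FTL_BASE_SIZE=20480 request exceeds this 5120 MiB namespace, [[ 20480 -le 5120 ]] fails and the test instead carves a thin-provisioned 20480 MiB lvol on top of it, as the following trace shows.)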
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:20.552 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:21.120 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=b30ea659-6775-439b-958c-14a06bb7e9ea 00:31:21.120 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:21.120 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b30ea659-6775-439b-958c-14a06bb7e9ea 00:31:21.379 15:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=fe4e73d9-3f27-4391-959c-3cd81520ad2b 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u fe4e73d9-3f27-4391-959c-3cd81520ad2b 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c76dce0b-881a-45ed-b593-fb65d6ec1292 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c76dce0b-881a-45ed-b593-fb65d6ec1292 ]] 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c76dce0b-881a-45ed-b593-fb65d6ec1292 5120 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c76dce0b-881a-45ed-b593-fb65d6ec1292 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c76dce0b-881a-45ed-b593-fb65d6ec1292 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c76dce0b-881a-45ed-b593-fb65d6ec1292 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:21.637 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c76dce0b-881a-45ed-b593-fb65d6ec1292 00:31:21.896 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:21.896 { 00:31:21.896 "name": "c76dce0b-881a-45ed-b593-fb65d6ec1292", 00:31:21.896 "aliases": [ 00:31:21.896 "lvs/basen1p0" 00:31:21.896 ], 00:31:21.896 "product_name": "Logical Volume", 00:31:21.896 "block_size": 4096, 00:31:21.896 "num_blocks": 5242880, 00:31:21.896 "uuid": "c76dce0b-881a-45ed-b593-fb65d6ec1292", 00:31:21.896 "assigned_rate_limits": { 00:31:21.896 "rw_ios_per_sec": 0, 00:31:21.896 "rw_mbytes_per_sec": 0, 00:31:21.896 "r_mbytes_per_sec": 0, 00:31:21.896 "w_mbytes_per_sec": 0 00:31:21.896 }, 00:31:21.896 "claimed": false, 00:31:21.896 "zoned": false, 00:31:21.896 "supported_io_types": { 00:31:21.896 "read": true, 00:31:21.896 "write": true, 00:31:21.896 "unmap": true, 00:31:21.896 "flush": false, 00:31:21.896 "reset": true, 00:31:21.896 "nvme_admin": false, 00:31:21.896 "nvme_io": false, 00:31:21.896 "nvme_io_md": false, 00:31:21.896 "write_zeroes": 
true, 00:31:21.896 "zcopy": false, 00:31:21.896 "get_zone_info": false, 00:31:21.896 "zone_management": false, 00:31:21.896 "zone_append": false, 00:31:21.896 "compare": false, 00:31:21.896 "compare_and_write": false, 00:31:21.896 "abort": false, 00:31:21.896 "seek_hole": true, 00:31:21.896 "seek_data": true, 00:31:21.896 "copy": false, 00:31:21.896 "nvme_iov_md": false 00:31:21.896 }, 00:31:21.896 "driver_specific": { 00:31:21.896 "lvol": { 00:31:21.896 "lvol_store_uuid": "fe4e73d9-3f27-4391-959c-3cd81520ad2b", 00:31:21.896 "base_bdev": "basen1", 00:31:21.896 "thin_provision": true, 00:31:21.896 "num_allocated_clusters": 0, 00:31:21.896 "snapshot": false, 00:31:21.896 "clone": false, 00:31:21.896 "esnap_clone": false 00:31:21.896 } 00:31:21.896 } 00:31:21.896 } 00:31:21.896 ]' 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:22.156 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:22.415 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:22.415 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:22.415 15:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:22.674 15:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:22.674 15:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:22.674 15:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c76dce0b-881a-45ed-b593-fb65d6ec1292 -c cachen1p0 --l2p_dram_limit 2 00:31:22.934 [2024-12-06 15:58:11.515008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.515119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:22.934 [2024-12-06 15:58:11.515140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:22.934 [2024-12-06 15:58:11.515156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.515233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.515257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:22.934 [2024-12-06 15:58:11.515272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:22.934 [2024-12-06 15:58:11.515297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.515343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:22.934 [2024-12-06 
15:58:11.515694] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:22.934 [2024-12-06 15:58:11.515732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.515761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:22.934 [2024-12-06 15:58:11.515775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:31:22.934 [2024-12-06 15:58:11.515789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.516000] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c3a705be-1267-4d4c-a4d4-d257e9b4f953 00:31:22.934 [2024-12-06 15:58:11.518276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.518344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:22.934 [2024-12-06 15:58:11.518385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:22.934 [2024-12-06 15:58:11.518398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.530340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.530395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:22.934 [2024-12-06 15:58:11.530432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.884 ms 00:31:22.934 [2024-12-06 15:58:11.530443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.530518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.530541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:22.934 [2024-12-06 15:58:11.530563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:22.934 [2024-12-06 15:58:11.530575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.530733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.530753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:22.934 [2024-12-06 15:58:11.530779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:22.934 [2024-12-06 15:58:11.530790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.530829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:22.934 [2024-12-06 15:58:11.533565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.533601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:22.934 [2024-12-06 15:58:11.533631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.751 ms 00:31:22.934 [2024-12-06 15:58:11.533654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.533692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.533710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:22.934 [2024-12-06 15:58:11.533721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:22.934 [2024-12-06 15:58:11.533746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.533770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:22.934 [2024-12-06 15:58:11.533982] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:22.934 [2024-12-06 15:58:11.534018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:22.934 [2024-12-06 15:58:11.534038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:22.934 [2024-12-06 15:58:11.534053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:22.934 [2024-12-06 15:58:11.534074] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:22.934 [2024-12-06 15:58:11.534086] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:22.934 [2024-12-06 15:58:11.534103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:22.934 [2024-12-06 15:58:11.534113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:22.934 [2024-12-06 15:58:11.534126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:22.934 [2024-12-06 15:58:11.534139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.534152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:22.934 [2024-12-06 15:58:11.534165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:31:22.934 [2024-12-06 15:58:11.534178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.534270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.934 [2024-12-06 15:58:11.534290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:22.934 [2024-12-06 15:58:11.534302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:31:22.934 [2024-12-06 15:58:11.534316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.934 [2024-12-06 15:58:11.534427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:22.934 [2024-12-06 15:58:11.534463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:22.934 [2024-12-06 15:58:11.534476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:22.934 [2024-12-06 15:58:11.534500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.934 [2024-12-06 15:58:11.534512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:22.934 [2024-12-06 15:58:11.534525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:22.934 [2024-12-06 15:58:11.534535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:22.934 [2024-12-06 15:58:11.534548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:22.934 [2024-12-06 15:58:11.534558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:22.934 [2024-12-06 15:58:11.534570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:22.935 [2024-12-06 15:58:11.534593] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:22.935 [2024-12-06 15:58:11.534603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:22.935 [2024-12-06 15:58:11.534628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:22.935 [2024-12-06 15:58:11.534640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:22.935 [2024-12-06 15:58:11.534663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:22.935 [2024-12-06 15:58:11.534672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:22.935 [2024-12-06 15:58:11.534694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:22.935 [2024-12-06 15:58:11.534708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:22.935 [2024-12-06 15:58:11.534719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:22.935 [2024-12-06 15:58:11.534732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:22.935 [2024-12-06 15:58:11.534741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:22.935 [2024-12-06 15:58:11.534754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:22.935 [2024-12-06 15:58:11.534764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:22.935 [2024-12-06 15:58:11.534779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:22.935 [2024-12-06 15:58:11.534789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:22.935 [2024-12-06 15:58:11.534805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:22.935 [2024-12-06 15:58:11.534815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:22.935 [2024-12-06 15:58:11.534827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:22.935 [2024-12-06 15:58:11.534837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:22.935 [2024-12-06 15:58:11.534849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:22.935 [2024-12-06 15:58:11.534872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:22.935 [2024-12-06 15:58:11.534882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:22.935 [2024-12-06 15:58:11.534905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.534928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:22.935 [2024-12-06 15:58:11.534970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:22.935 [2024-12-06 15:58:11.534999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.535012] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:22.935 [2024-12-06 15:58:11.535024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:22.935 [2024-12-06 15:58:11.535041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:22.935 [2024-12-06 15:58:11.535053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:22.935 [2024-12-06 15:58:11.535068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:22.935 [2024-12-06 15:58:11.535093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:22.935 [2024-12-06 15:58:11.535107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:22.935 [2024-12-06 15:58:11.535118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:22.935 [2024-12-06 15:58:11.535134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:22.935 [2024-12-06 15:58:11.535145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:22.935 [2024-12-06 15:58:11.535162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:22.935 [2024-12-06 15:58:11.535180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:22.935 [2024-12-06 15:58:11.535208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:22.935 [2024-12-06 15:58:11.535249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:22.935 [2024-12-06 15:58:11.535261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:22.935 [2024-12-06 15:58:11.535277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:22.935 [2024-12-06 15:58:11.535289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:22.935 [2024-12-06 15:58:11.535407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:22.935 [2024-12-06 15:58:11.535427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:22.935 [2024-12-06 15:58:11.535454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:22.935 [2024-12-06 15:58:11.535468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:22.935 [2024-12-06 15:58:11.535479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:22.935 [2024-12-06 15:58:11.535495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.935 [2024-12-06 15:58:11.535507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:22.935 [2024-12-06 15:58:11.535524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.124 ms 00:31:22.935 [2024-12-06 15:58:11.535535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.935 [2024-12-06 15:58:11.535640] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
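For reference, the FTL instance whose layout is dumped above was assembled from the RPCs traced earlier in this run. A condensed sketch of that sequence (rpc.py path shortened; the base-device UUID, split size, and controller address are copied verbatim from the trace):

  # attach the PCIe NVMe controller that backs the write-buffer cache
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  # carve one 5120 MiB partition (cachen1p0) out of cachen1 for the NV cache
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # create the FTL bdev on the thin-provisioned lvol, with cachen1p0 as cache
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c76dce0b-881a-45ed-b593-fb65d6ec1292 -c cachen1p0 --l2p_dram_limit 2

The capacities in the layout dump follow from the earlier bdev query: block_size 4096 × num_blocks 5242880 = 20480 MiB for the base device, and the 5120 MiB split matches the NV cache capacity line.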
00:31:22.935 [2024-12-06 15:58:11.535671] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:26.223 [2024-12-06 15:58:14.701347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.701451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:26.223 [2024-12-06 15:58:14.701496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3165.719 ms 00:31:26.223 [2024-12-06 15:58:14.701508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.716904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.716979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:26.223 [2024-12-06 15:58:14.717020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.254 ms 00:31:26.223 [2024-12-06 15:58:14.717032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.717136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.717156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:26.223 [2024-12-06 15:58:14.717171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:26.223 [2024-12-06 15:58:14.717194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.733599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.733646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:26.223 [2024-12-06 15:58:14.733683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.291 ms 00:31:26.223 [2024-12-06 15:58:14.733694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.733750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.733764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:26.223 [2024-12-06 15:58:14.733780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:26.223 [2024-12-06 15:58:14.733800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.734724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.734771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:26.223 [2024-12-06 15:58:14.734805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.771 ms 00:31:26.223 [2024-12-06 15:58:14.734817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.734883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.734899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:26.223 [2024-12-06 15:58:14.734913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:26.223 [2024-12-06 15:58:14.734924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.746276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.746313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:26.223 [2024-12-06 15:58:14.746348] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.322 ms 00:31:26.223 [2024-12-06 15:58:14.746360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.764590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:26.223 [2024-12-06 15:58:14.766421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.766459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:26.223 [2024-12-06 15:58:14.766491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.954 ms 00:31:26.223 [2024-12-06 15:58:14.766504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.783116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.783164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:26.223 [2024-12-06 15:58:14.783202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.574 ms 00:31:26.223 [2024-12-06 15:58:14.783219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.223 [2024-12-06 15:58:14.783322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.223 [2024-12-06 15:58:14.783343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:26.223 [2024-12-06 15:58:14.783356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:31:26.224 [2024-12-06 15:58:14.783369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.786647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.786707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:26.224 [2024-12-06 15:58:14.786724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.188 ms 00:31:26.224 [2024-12-06 15:58:14.786741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.790056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.790116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:26.224 [2024-12-06 15:58:14.790131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.273 ms 00:31:26.224 [2024-12-06 15:58:14.790143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.790549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.790585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:26.224 [2024-12-06 15:58:14.790601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.366 ms 00:31:26.224 [2024-12-06 15:58:14.790629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.827237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.827306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:26.224 [2024-12-06 15:58:14.827328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.580 ms 00:31:26.224 [2024-12-06 15:58:14.827343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.832522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:26.224 [2024-12-06 15:58:14.832582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:26.224 [2024-12-06 15:58:14.832599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.092 ms 00:31:26.224 [2024-12-06 15:58:14.832613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.836267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.836326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:26.224 [2024-12-06 15:58:14.836342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.612 ms 00:31:26.224 [2024-12-06 15:58:14.836354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.840284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.840361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:26.224 [2024-12-06 15:58:14.840377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.885 ms 00:31:26.224 [2024-12-06 15:58:14.840392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.840451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.840474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:26.224 [2024-12-06 15:58:14.840486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:26.224 [2024-12-06 15:58:14.840499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.840612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.224 [2024-12-06 15:58:14.840633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:26.224 [2024-12-06 15:58:14.840645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:31:26.224 [2024-12-06 15:58:14.840672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.224 [2024-12-06 15:58:14.842275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3326.776 ms, result 0 00:31:26.224 { 00:31:26.224 "name": "ftl", 00:31:26.224 "uuid": "c3a705be-1267-4d4c-a4d4-d257e9b4f953" 00:31:26.224 } 00:31:26.224 15:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:26.482 [2024-12-06 15:58:15.108055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.482 15:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:27.047 15:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:27.047 [2024-12-06 15:58:15.640645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:27.047 15:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:27.305 [2024-12-06 15:58:15.853122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:27.305 15:58:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:27.563 Fill FTL, iteration 1 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94324 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94324 /var/tmp/spdk.tgt.sock 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94324 ']' 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.563 15:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:27.822 [2024-12-06 15:58:16.348525] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
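The dd geometry established above is worth unpacking: bs=1048576 and count=1024 mean each fill pass writes 1048576 × 1024 = 1073741824 bytes, exactly the declared size (1 GiB), and iterations=2 doubles that to 2 GiB, with --seek advancing by one pass's worth of blocks between iterations. A minimal reconstruction of the loop shape implied by the traced assignments (not the script verbatim; the checksum step the real loop interleaves is sketched further below):

  bs=1048576; count=1024; qd=2; iterations=2    # 1 GiB per pass, two passes
  seek=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))                  # trace shows seek=1024, then 2048
  done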
00:31:27.822 [2024-12-06 15:58:16.348759] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94324 ] 00:31:27.822 [2024-12-06 15:58:16.513357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.080 [2024-12-06 15:58:16.556386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.649 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.649 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:28.649 15:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:28.918 ftln1 00:31:28.918 15:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:28.918 15:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94324 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94324 ']' 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94324 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94324 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94324' 00:31:29.225 killing process with pid 94324 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94324 00:31:29.225 15:58:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94324 00:31:29.804 15:58:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:29.804 15:58:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:29.804 [2024-12-06 15:58:18.441622] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
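The tcp_dd helper traced above follows a consistent pattern: a short-lived SPDK app on /var/tmp/spdk.tgt.sock attaches to the NVMe/TCP subsystem exported earlier (nqn.2018-09.io.spdk:cnode0 at 127.0.0.1:4420), producing bdev ftln1; its bdev subsystem configuration is captured as JSON (evidently into the ini.json file the [[ -f ... ]] check looks for); the helper is killed; and spdk_dd is launched with that JSON so it can open ftln1 itself. Condensed from the xtrace, paths shortened:

  # one-shot helper target, used only to generate the bdev config
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # prints: ftln1
  {
      echo '{"subsystems": ['
      scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
  } > test/ftl/config/ini.json
  # spdk_dd replays that config and drives ftln1 directly
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0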
00:31:29.804 [2024-12-06 15:58:18.441816] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94355 ] 00:31:30.062 [2024-12-06 15:58:18.599018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.062 [2024-12-06 15:58:18.635515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.437  [2024-12-06T15:58:21.065Z] Copying: 266/1024 [MB] (266 MBps) [2024-12-06T15:58:22.002Z] Copying: 530/1024 [MB] (264 MBps) [2024-12-06T15:58:22.940Z] Copying: 790/1024 [MB] (260 MBps) [2024-12-06T15:58:23.199Z] Copying: 1024/1024 [MB] (average 262 MBps) 00:31:34.506 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:34.506 Calculate MD5 checksum, iteration 1 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:34.506 15:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:34.506 [2024-12-06 15:58:23.125822] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
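Verification mirrors the fill: the same 1 GiB window is read back from ftln1 into a scratch file through the identical spdk_dd path, and the checksum is taken host-side. Condensed from the trace (file path shortened):

  tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')   # iteration 1 yields 0ef327e0fd117a4b88da508b34587485

Keeping one sum per 1 GiB iteration gives the test a fingerprint of the written data that can be compared again once the device has been through the shutdown-with-upgrade-prep cycle exercised below.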
00:31:34.506 [2024-12-06 15:58:23.125995] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94404 ] 00:31:34.764 [2024-12-06 15:58:23.269618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.764 [2024-12-06 15:58:23.299711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.142  [2024-12-06T15:58:25.771Z] Copying: 438/1024 [MB] (438 MBps) [2024-12-06T15:58:26.030Z] Copying: 878/1024 [MB] (440 MBps) [2024-12-06T15:58:26.288Z] Copying: 1024/1024 [MB] (average 438 MBps) 00:31:37.595 00:31:37.595 15:58:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:37.595 15:58:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:39.498 Fill FTL, iteration 2 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0ef327e0fd117a4b88da508b34587485 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:39.498 15:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:39.498 [2024-12-06 15:58:28.136853] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:31:39.498 [2024-12-06 15:58:28.137103] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94463 ] 00:31:39.756 [2024-12-06 15:58:28.300931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.756 [2024-12-06 15:58:28.340733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.134  [2024-12-06T15:58:30.765Z] Copying: 265/1024 [MB] (265 MBps) [2024-12-06T15:58:31.704Z] Copying: 528/1024 [MB] (263 MBps) [2024-12-06T15:58:32.642Z] Copying: 790/1024 [MB] (262 MBps) [2024-12-06T15:58:32.901Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:31:44.208 00:31:44.208 Calculate MD5 checksum, iteration 2 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:44.208 15:58:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:44.208 [2024-12-06 15:58:32.807060] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:31:44.208 [2024-12-06 15:58:32.807488] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94511 ] 00:31:44.467 [2024-12-06 15:58:32.952416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.468 [2024-12-06 15:58:32.983307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.846  [2024-12-06T15:58:35.475Z] Copying: 438/1024 [MB] (438 MBps) [2024-12-06T15:58:36.041Z] Copying: 864/1024 [MB] (426 MBps) [2024-12-06T15:58:36.607Z] Copying: 1024/1024 [MB] (average 430 MBps) 00:31:47.915 00:31:47.915 15:58:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:47.915 15:58:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:49.817 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:49.817 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=05f06c957ca4baa040f7dd82348534e1 00:31:49.817 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:49.817 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:49.817 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:50.075 [2024-12-06 15:58:38.528369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.075 [2024-12-06 15:58:38.528432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:50.075 [2024-12-06 15:58:38.528471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:50.075 [2024-12-06 15:58:38.528484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.075 [2024-12-06 15:58:38.528532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.075 [2024-12-06 15:58:38.528551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:50.075 [2024-12-06 15:58:38.528564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:50.075 [2024-12-06 15:58:38.528577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.075 [2024-12-06 15:58:38.528607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.075 [2024-12-06 15:58:38.528624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:50.075 [2024-12-06 15:58:38.528636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:50.075 [2024-12-06 15:58:38.528657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.075 [2024-12-06 15:58:38.528773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.384 ms, result 0 00:31:50.075 true 00:31:50.075 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:50.335 { 00:31:50.335 "name": "ftl", 00:31:50.335 "properties": [ 00:31:50.335 { 00:31:50.335 "name": "superblock_version", 00:31:50.335 "value": 5, 00:31:50.335 "read-only": true 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "name": "base_device", 00:31:50.335 "bands": [ 00:31:50.335 { 00:31:50.335 "id": 
0, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 1, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 2, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 3, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 4, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 5, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 6, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 7, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 8, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 9, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 10, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 11, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 12, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 13, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 14, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 15, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 16, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 17, 00:31:50.335 "state": "FREE", 00:31:50.335 "validity": 0.0 00:31:50.335 } 00:31:50.335 ], 00:31:50.335 "read-only": true 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "name": "cache_device", 00:31:50.335 "type": "bdev", 00:31:50.335 "chunks": [ 00:31:50.335 { 00:31:50.335 "id": 0, 00:31:50.335 "state": "INACTIVE", 00:31:50.335 "utilization": 0.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 1, 00:31:50.335 "state": "CLOSED", 00:31:50.335 "utilization": 1.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 2, 00:31:50.335 "state": "CLOSED", 00:31:50.335 "utilization": 1.0 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 3, 00:31:50.335 "state": "OPEN", 00:31:50.335 "utilization": 0.001953125 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "id": 4, 00:31:50.335 "state": "OPEN", 00:31:50.335 "utilization": 0.0 00:31:50.335 } 00:31:50.335 ], 00:31:50.335 "read-only": true 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "name": "verbose_mode", 00:31:50.335 "value": true, 00:31:50.335 "unit": "", 00:31:50.335 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:50.335 }, 00:31:50.335 { 00:31:50.335 "name": "prep_upgrade_on_shutdown", 00:31:50.335 "value": false, 00:31:50.335 "unit": "", 00:31:50.335 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:50.335 } 00:31:50.335 ] 00:31:50.335 } 00:31:50.335 15:58:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:50.335 [2024-12-06 15:58:38.992783] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.335 [2024-12-06 15:58:38.992833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:50.335 [2024-12-06 15:58:38.992852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:50.335 [2024-12-06 15:58:38.992865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.335 [2024-12-06 15:58:38.992901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.335 [2024-12-06 15:58:38.992919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:50.335 [2024-12-06 15:58:38.992933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:50.335 [2024-12-06 15:58:38.992963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.335 [2024-12-06 15:58:38.993027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.335 [2024-12-06 15:58:38.993044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:50.335 [2024-12-06 15:58:38.993056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:50.335 [2024-12-06 15:58:38.993067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.335 [2024-12-06 15:58:38.993145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.337 ms, result 0 00:31:50.335 true 00:31:50.335 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:50.335 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:50.335 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:50.594 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:50.595 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:50.595 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:50.853 [2024-12-06 15:58:39.397167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.854 [2024-12-06 15:58:39.397213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:50.854 [2024-12-06 15:58:39.397231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:50.854 [2024-12-06 15:58:39.397242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.854 [2024-12-06 15:58:39.397275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.854 [2024-12-06 15:58:39.397292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:50.854 [2024-12-06 15:58:39.397305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:50.854 [2024-12-06 15:58:39.397315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.854 [2024-12-06 15:58:39.397342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.854 [2024-12-06 15:58:39.397356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:50.854 [2024-12-06 15:58:39.397368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:50.854 [2024-12-06 
15:58:39.397379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.854 [2024-12-06 15:58:39.397442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.258 ms, result 0 00:31:50.854 true 00:31:50.854 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.113 { 00:31:51.113 "name": "ftl", 00:31:51.113 "properties": [ 00:31:51.113 { 00:31:51.113 "name": "superblock_version", 00:31:51.113 "value": 5, 00:31:51.113 "read-only": true 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "name": "base_device", 00:31:51.113 "bands": [ 00:31:51.113 { 00:31:51.113 "id": 0, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 1, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 2, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 3, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 4, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 5, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 6, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 7, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 8, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 9, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 10, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 11, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 12, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 13, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 14, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 15, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 16, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 17, 00:31:51.113 "state": "FREE", 00:31:51.113 "validity": 0.0 00:31:51.113 } 00:31:51.113 ], 00:31:51.113 "read-only": true 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "name": "cache_device", 00:31:51.113 "type": "bdev", 00:31:51.113 "chunks": [ 00:31:51.113 { 00:31:51.113 "id": 0, 00:31:51.113 "state": "INACTIVE", 00:31:51.113 "utilization": 0.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 1, 00:31:51.113 "state": "CLOSED", 00:31:51.113 "utilization": 1.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 2, 00:31:51.113 "state": "CLOSED", 00:31:51.113 "utilization": 1.0 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 3, 00:31:51.113 "state": "OPEN", 00:31:51.113 "utilization": 0.001953125 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "id": 4, 00:31:51.113 "state": "OPEN", 00:31:51.113 "utilization": 0.0 00:31:51.113 } 00:31:51.113 ], 00:31:51.113 "read-only": true 00:31:51.113 
}, 00:31:51.113 { 00:31:51.113 "name": "verbose_mode", 00:31:51.113 "value": true, 00:31:51.113 "unit": "", 00:31:51.113 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:51.113 }, 00:31:51.113 { 00:31:51.113 "name": "prep_upgrade_on_shutdown", 00:31:51.113 "value": true, 00:31:51.113 "unit": "", 00:31:51.113 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:51.113 } 00:31:51.113 ] 00:31:51.113 } 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94196 ]] 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94196 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94196 ']' 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94196 00:31:51.113 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94196 00:31:51.114 killing process with pid 94196 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94196' 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94196 00:31:51.114 15:58:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94196 00:31:51.373 [2024-12-06 15:58:39.843890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:51.373 [2024-12-06 15:58:39.850501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.373 [2024-12-06 15:58:39.850546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:51.373 [2024-12-06 15:58:39.850587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:51.373 [2024-12-06 15:58:39.850600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.373 [2024-12-06 15:58:39.850636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:51.373 [2024-12-06 15:58:39.851485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.373 [2024-12-06 15:58:39.851518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:51.373 [2024-12-06 15:58:39.851557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:31:51.373 [2024-12-06 15:58:39.851570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.137201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.137317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:59.505 [2024-12-06 15:58:47.137340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7285.637 ms 00:31:59.505 [2024-12-06 15:58:47.137352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 
15:58:47.138581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.138634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:59.505 [2024-12-06 15:58:47.138649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:31:59.505 [2024-12-06 15:58:47.138661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.139866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.139904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:59.505 [2024-12-06 15:58:47.139916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:31:59.505 [2024-12-06 15:58:47.139927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.142426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.142500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:59.505 [2024-12-06 15:58:47.142544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.430 ms 00:31:59.505 [2024-12-06 15:58:47.142555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.145293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.145348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:59.505 [2024-12-06 15:58:47.145364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.704 ms 00:31:59.505 [2024-12-06 15:58:47.145382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.145457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.145475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:59.505 [2024-12-06 15:58:47.145487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:59.505 [2024-12-06 15:58:47.145498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.146962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.147039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:59.505 [2024-12-06 15:58:47.147055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.445 ms 00:31:59.505 [2024-12-06 15:58:47.147065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.148430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.148480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:59.505 [2024-12-06 15:58:47.148494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.328 ms 00:31:59.505 [2024-12-06 15:58:47.148504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.149782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.149849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:59.505 [2024-12-06 15:58:47.149862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.243 ms 00:31:59.505 [2024-12-06 15:58:47.149872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
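The statistics dumped just below are internally consistent and worth a quick sanity check: the bands dump accounts for two full bands plus a sliver of a third, user writes equal exactly the 2 GiB of fill data at the 4096-byte block size, and total writes over user writes reproduces the reported write amplification factor. In shell arithmetic, with values copied from the dump:

  echo $(( 2 * 261120 + 2048 ))   # 524288 -- valid LBAs across bands 1-3
  echo $(( 524288 * 4096 ))       # 2147483648 bytes = the 2 GiB written above
  # WAF: 786752 total writes / 524288 user writes ~= 1.5006, as reported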
00:31:59.505 [2024-12-06 15:58:47.151142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.505 [2024-12-06 15:58:47.151206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:59.505 [2024-12-06 15:58:47.151219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:31:59.505 [2024-12-06 15:58:47.151229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.505 [2024-12-06 15:58:47.151264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:59.505 [2024-12-06 15:58:47.151286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:59.505 [2024-12-06 15:58:47.151301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:59.505 [2024-12-06 15:58:47.151312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:59.505 [2024-12-06 15:58:47.151323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:59.505 [2024-12-06 15:58:47.151519] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:59.505 [2024-12-06 15:58:47.151530] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c3a705be-1267-4d4c-a4d4-d257e9b4f953 00:31:59.505 [2024-12-06 15:58:47.151541] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:59.505 [2024-12-06 15:58:47.151551] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:31:59.505 [2024-12-06 15:58:47.151570] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:59.505 [2024-12-06 15:58:47.151582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:59.505 [2024-12-06 15:58:47.151593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:59.505 [2024-12-06 15:58:47.151603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:59.506 [2024-12-06 15:58:47.151625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:59.506 [2024-12-06 15:58:47.151635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:59.506 [2024-12-06 15:58:47.151645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:59.506 [2024-12-06 15:58:47.151655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.506 [2024-12-06 15:58:47.151666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:59.506 [2024-12-06 15:58:47.151678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:31:59.506 [2024-12-06 15:58:47.151689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.154319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.506 [2024-12-06 15:58:47.154370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:59.506 [2024-12-06 15:58:47.154384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.608 ms 00:31:59.506 [2024-12-06 15:58:47.154394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.154563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:59.506 [2024-12-06 15:58:47.154578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:59.506 [2024-12-06 15:58:47.154620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.143 ms 00:31:59.506 [2024-12-06 15:58:47.154647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.164595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.164666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:59.506 [2024-12-06 15:58:47.164721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.164734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.164774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.164790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:59.506 [2024-12-06 15:58:47.164801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.164829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.165077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.165101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:59.506 [2024-12-06 15:58:47.165115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.165136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 
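The statistics dump above reports total writes 786752 against user writes 524288; the write amplification factor is simply their ratio, 786752 / 524288 ≈ 1.5006, which matches the logged WAF. A one-line check of that arithmetic:

  awk 'BEGIN { printf "WAF = %.4f\n", 786752 / 524288 }'   # prints WAF = 1.5006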
[2024-12-06 15:58:47.165190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.165204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:59.506 [2024-12-06 15:58:47.165224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.165236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.181715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.181797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:59.506 [2024-12-06 15:58:47.181814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.181824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:59.506 [2024-12-06 15:58:47.193300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.193311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:59.506 [2024-12-06 15:58:47.193466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.193477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:59.506 [2024-12-06 15:58:47.193616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.193629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:59.506 [2024-12-06 15:58:47.193773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.193785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:59.506 [2024-12-06 15:58:47.193883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.193894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.193963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.193997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:59.506 [2024-12-06 15:58:47.194009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.194026] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.194097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:59.506 [2024-12-06 15:58:47.194115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:59.506 [2024-12-06 15:58:47.194128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:59.506 [2024-12-06 15:58:47.194139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:59.506 [2024-12-06 15:58:47.194312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7343.807 ms, result 0 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94682 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94682 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94682 ']' 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.043 15:58:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:02.043 [2024-12-06 15:58:50.283611] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
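With 'FTL shutdown' finished (7343.807 ms, result 0), the harness relaunches the target for the upgrade path. Reconstructed from the xtrace above, tcp_target_setup amounts to roughly the following; the binary path, cpumask, config file, and waitforlisten call are taken from the trace, while the $rootdir variable and the exact job-control form are assumptions:

  # Sketch of the target relaunch as seen in the xtrace; the real
  # ftl/common.sh may background and track the pid differently.
  "$rootdir"/build/bin/spdk_tgt '--cpumask=[0]' \
      --config="$rootdir"/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"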
00:32:02.043 [2024-12-06 15:58:50.284232] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94682 ] 00:32:02.043 [2024-12-06 15:58:50.440261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.043 [2024-12-06 15:58:50.481646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.303 [2024-12-06 15:58:50.902427] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:02.303 [2024-12-06 15:58:50.902516] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:02.564 [2024-12-06 15:58:51.047411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.047459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:02.564 [2024-12-06 15:58:51.047483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:02.564 [2024-12-06 15:58:51.047493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.047559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.047583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:02.564 [2024-12-06 15:58:51.047599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:32:02.564 [2024-12-06 15:58:51.047609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.047639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:02.564 [2024-12-06 15:58:51.047883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:02.564 [2024-12-06 15:58:51.047912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.047924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:02.564 [2024-12-06 15:58:51.047935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:32:02.564 [2024-12-06 15:58:51.047945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.050583] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:02.564 [2024-12-06 15:58:51.054066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.054111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:02.564 [2024-12-06 15:58:51.054136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.485 ms 00:32:02.564 [2024-12-06 15:58:51.054154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.054220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.054237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:02.564 [2024-12-06 15:58:51.054247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:02.564 [2024-12-06 15:58:51.054257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.065352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 
15:58:51.065402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:02.564 [2024-12-06 15:58:51.065417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.033 ms 00:32:02.564 [2024-12-06 15:58:51.065427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.065490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.065506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:02.564 [2024-12-06 15:58:51.065517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:02.564 [2024-12-06 15:58:51.065535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.065612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.065635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:02.564 [2024-12-06 15:58:51.065645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:02.564 [2024-12-06 15:58:51.065658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.065692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:02.564 [2024-12-06 15:58:51.068404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.068436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:02.564 [2024-12-06 15:58:51.068450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.721 ms 00:32:02.564 [2024-12-06 15:58:51.068459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.068494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.068520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:02.564 [2024-12-06 15:58:51.068531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:02.564 [2024-12-06 15:58:51.068540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.068574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:02.564 [2024-12-06 15:58:51.068611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:02.564 [2024-12-06 15:58:51.068653] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:02.564 [2024-12-06 15:58:51.068694] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:02.564 [2024-12-06 15:58:51.068820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:02.564 [2024-12-06 15:58:51.068835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:02.564 [2024-12-06 15:58:51.068848] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:02.564 [2024-12-06 15:58:51.068862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:02.564 [2024-12-06 15:58:51.068874] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:02.564 [2024-12-06 15:58:51.068885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:02.564 [2024-12-06 15:58:51.068894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:02.564 [2024-12-06 15:58:51.068904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:02.564 [2024-12-06 15:58:51.068914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:02.564 [2024-12-06 15:58:51.068926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.068936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:02.564 [2024-12-06 15:58:51.068951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:32:02.564 [2024-12-06 15:58:51.068961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.069097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.564 [2024-12-06 15:58:51.069128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:02.564 [2024-12-06 15:58:51.069140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:32:02.564 [2024-12-06 15:58:51.069154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.564 [2024-12-06 15:58:51.069276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:02.564 [2024-12-06 15:58:51.069292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:02.564 [2024-12-06 15:58:51.069303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.564 [2024-12-06 15:58:51.069318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.564 [2024-12-06 15:58:51.069329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:02.564 [2024-12-06 15:58:51.069338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:02.564 [2024-12-06 15:58:51.069347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:02.564 [2024-12-06 15:58:51.069356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:02.564 [2024-12-06 15:58:51.069366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:02.564 [2024-12-06 15:58:51.069375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:02.565 [2024-12-06 15:58:51.069424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:02.565 [2024-12-06 15:58:51.069447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:02.565 [2024-12-06 15:58:51.069469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:02.565 [2024-12-06 15:58:51.069499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:02.565 [2024-12-06 15:58:51.069520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:02.565 [2024-12-06 15:58:51.069530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069540] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:02.565 [2024-12-06 15:58:51.069550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:02.565 [2024-12-06 15:58:51.069579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:02.565 [2024-12-06 15:58:51.069609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:02.565 [2024-12-06 15:58:51.069638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:02.565 [2024-12-06 15:58:51.069676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:02.565 [2024-12-06 15:58:51.069707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:02.565 [2024-12-06 15:58:51.069736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:02.565 [2024-12-06 15:58:51.069764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:02.565 [2024-12-06 15:58:51.069774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069783] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:02.565 [2024-12-06 15:58:51.069794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:02.565 [2024-12-06 15:58:51.069811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.565 [2024-12-06 15:58:51.069836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:02.565 [2024-12-06 15:58:51.069847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:02.565 [2024-12-06 15:58:51.069857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:02.565 [2024-12-06 15:58:51.069867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:02.565 [2024-12-06 15:58:51.069877] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:02.565 [2024-12-06 15:58:51.069888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:02.565 [2024-12-06 15:58:51.069899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:02.565 [2024-12-06 15:58:51.069913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.069925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:02.565 [2024-12-06 15:58:51.069936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.069946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.069957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:02.565 [2024-12-06 15:58:51.069967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:02.565 [2024-12-06 15:58:51.069977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:02.565 [2024-12-06 15:58:51.069988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:02.565 [2024-12-06 15:58:51.069998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:02.565 [2024-12-06 15:58:51.070091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:02.565 [2024-12-06 15:58:51.070103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:02.565 [2024-12-06 15:58:51.070127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:02.565 [2024-12-06 15:58:51.070137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:02.565 [2024-12-06 15:58:51.070162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:02.565 [2024-12-06 15:58:51.070174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.565 [2024-12-06 15:58:51.070191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:02.565 [2024-12-06 15:58:51.070208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:32:02.565 [2024-12-06 15:58:51.070220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.565 [2024-12-06 15:58:51.070301] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:02.565 [2024-12-06 15:58:51.070320] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:05.924 [2024-12-06 15:58:54.142604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.142687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:05.924 [2024-12-06 15:58:54.142709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3072.323 ms 00:32:05.924 [2024-12-06 15:58:54.142728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.159970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.160028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:05.924 [2024-12-06 15:58:54.160048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.112 ms 00:32:05.924 [2024-12-06 15:58:54.160059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.160181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.160200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:05.924 [2024-12-06 15:58:54.160212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:05.924 [2024-12-06 15:58:54.160223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.176458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.176706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:05.924 [2024-12-06 15:58:54.176841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.172 ms 00:32:05.924 [2024-12-06 15:58:54.176892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.176993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.177164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:05.924 [2024-12-06 15:58:54.177227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:05.924 [2024-12-06 15:58:54.177272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.177911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.178174] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:05.924 [2024-12-06 15:58:54.178202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:32:05.924 [2024-12-06 15:58:54.178214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.178287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.178306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:05.924 [2024-12-06 15:58:54.178319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:05.924 [2024-12-06 15:58:54.178331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.188522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.188716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:05.924 [2024-12-06 15:58:54.188761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.153 ms 00:32:05.924 [2024-12-06 15:58:54.188775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.201672] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:05.924 [2024-12-06 15:58:54.201715] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:05.924 [2024-12-06 15:58:54.201740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.201751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:05.924 [2024-12-06 15:58:54.201763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.817 ms 00:32:05.924 [2024-12-06 15:58:54.201773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.205689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.205728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:05.924 [2024-12-06 15:58:54.205744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.869 ms 00:32:05.924 [2024-12-06 15:58:54.205753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.207413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.207448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:05.924 [2024-12-06 15:58:54.207464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.622 ms 00:32:05.924 [2024-12-06 15:58:54.207473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.209269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.209444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:05.924 [2024-12-06 15:58:54.209470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.748 ms 00:32:05.924 [2024-12-06 15:58:54.209483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.209900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.209923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:05.924 [2024-12-06 
15:58:54.209937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:32:05.924 [2024-12-06 15:58:54.209996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.241643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.241739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:05.924 [2024-12-06 15:58:54.241767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.614 ms 00:32:05.924 [2024-12-06 15:58:54.241783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.252160] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:05.924 [2024-12-06 15:58:54.253290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.253341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:05.924 [2024-12-06 15:58:54.253364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.422 ms 00:32:05.924 [2024-12-06 15:58:54.253379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.253543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.253569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:05.924 [2024-12-06 15:58:54.253601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:05.924 [2024-12-06 15:58:54.253617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.253729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.253754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:05.924 [2024-12-06 15:58:54.253780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:05.924 [2024-12-06 15:58:54.253796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.253845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.253865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:05.924 [2024-12-06 15:58:54.253881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:05.924 [2024-12-06 15:58:54.253896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.254002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:05.924 [2024-12-06 15:58:54.254029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.254046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:05.924 [2024-12-06 15:58:54.254062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:32:05.924 [2024-12-06 15:58:54.254083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.258008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.258079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:05.924 [2024-12-06 15:58:54.258118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.880 ms 00:32:05.924 [2024-12-06 15:58:54.258135] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.258250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.924 [2024-12-06 15:58:54.258274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:05.924 [2024-12-06 15:58:54.258293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:32:05.924 [2024-12-06 15:58:54.258308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.924 [2024-12-06 15:58:54.260161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3212.069 ms, result 0 00:32:05.924 [2024-12-06 15:58:54.274620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.924 [2024-12-06 15:58:54.291381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:05.924 [2024-12-06 15:58:54.299485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:06.197 15:58:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.197 15:58:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:06.197 15:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:06.197 15:58:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:06.197 15:58:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:06.456 [2024-12-06 15:58:54.991862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.456 [2024-12-06 15:58:54.991915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:06.456 [2024-12-06 15:58:54.991934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:06.456 [2024-12-06 15:58:54.991963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.456 [2024-12-06 15:58:54.991995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.456 [2024-12-06 15:58:54.992010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:06.456 [2024-12-06 15:58:54.992030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:06.456 [2024-12-06 15:58:54.992040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.456 [2024-12-06 15:58:54.992076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:06.456 [2024-12-06 15:58:54.992089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:06.456 [2024-12-06 15:58:54.992100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:06.456 [2024-12-06 15:58:54.992110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:06.456 [2024-12-06 15:58:54.992185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.314 ms, result 0 00:32:06.456 true 00:32:06.456 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:06.714 { 00:32:06.714 "name": "ftl", 00:32:06.714 "properties": [ 00:32:06.714 { 00:32:06.714 "name": "superblock_version", 00:32:06.714 "value": 5, 00:32:06.714 "read-only": true 00:32:06.714 }, 
00:32:06.714 { 00:32:06.714 "name": "base_device", 00:32:06.714 "bands": [ 00:32:06.714 { 00:32:06.714 "id": 0, 00:32:06.714 "state": "CLOSED", 00:32:06.714 "validity": 1.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 1, 00:32:06.714 "state": "CLOSED", 00:32:06.714 "validity": 1.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 2, 00:32:06.714 "state": "CLOSED", 00:32:06.714 "validity": 0.007843137254901933 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 3, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 4, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 5, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 6, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 7, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 8, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 9, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 10, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 11, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 12, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 13, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 14, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 15, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 16, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 17, 00:32:06.714 "state": "FREE", 00:32:06.714 "validity": 0.0 00:32:06.714 } 00:32:06.714 ], 00:32:06.714 "read-only": true 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "name": "cache_device", 00:32:06.714 "type": "bdev", 00:32:06.714 "chunks": [ 00:32:06.714 { 00:32:06.714 "id": 0, 00:32:06.714 "state": "INACTIVE", 00:32:06.714 "utilization": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 1, 00:32:06.714 "state": "OPEN", 00:32:06.714 "utilization": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 2, 00:32:06.714 "state": "OPEN", 00:32:06.714 "utilization": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 3, 00:32:06.714 "state": "FREE", 00:32:06.714 "utilization": 0.0 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "id": 4, 00:32:06.714 "state": "FREE", 00:32:06.714 "utilization": 0.0 00:32:06.714 } 00:32:06.714 ], 00:32:06.714 "read-only": true 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "name": "verbose_mode", 00:32:06.714 "value": true, 00:32:06.714 "unit": "", 00:32:06.714 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:06.714 }, 00:32:06.714 { 00:32:06.714 "name": "prep_upgrade_on_shutdown", 00:32:06.714 "value": false, 00:32:06.714 "unit": "", 00:32:06.715 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:06.715 } 00:32:06.715 ] 00:32:06.715 } 00:32:06.715 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:06.715 15:58:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:06.715 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:06.973 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:06.973 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:06.973 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:06.973 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:06.973 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:07.231 Validate MD5 checksum, iteration 1 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:07.231 15:58:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:07.231 [2024-12-06 15:58:55.889915] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
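Before validating checksums, upgrade_shutdown.sh confirms the device is idle by counting in-use cache chunks and OPENED bands in the bdev_ftl_get_properties JSON shown earlier; both jq filters (copied verbatim from the xtrace) return 0 here. Standalone, with the rpc.py path shortened for readability and the failure action assumed:

  used=$(rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  opened=$(rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  # The trace shows used=0 and opened=0; what the script does on a nonzero
  # count is not visible in this excerpt.
  [[ $used -ne 0 || $opened -ne 0 ]] && echo "FTL device is not idle"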
00:32:07.231 [2024-12-06 15:58:55.890430] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94751 ] 00:32:07.489 [2024-12-06 15:58:56.056272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.489 [2024-12-06 15:58:56.099613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.865  [2024-12-06T15:58:58.496Z] Copying: 531/1024 [MB] (531 MBps) [2024-12-06T15:58:59.431Z] Copying: 1024/1024 [MB] (average 531 MBps) 00:32:10.738 00:32:10.738 15:58:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:10.739 15:58:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ef327e0fd117a4b88da508b34587485 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ef327e0fd117a4b88da508b34587485 != \0\e\f\3\2\7\e\0\f\d\1\1\7\a\4\b\8\8\d\a\5\0\8\b\3\4\5\8\7\4\8\5 ]] 00:32:12.642 Validate MD5 checksum, iteration 2 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:12.642 15:59:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:12.642 [2024-12-06 15:59:01.073363] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
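Iteration 1 above validated cleanly (the computed MD5 0ef327e0fd117a4b88da508b34587485 matched the recorded sum) and the loop advances skip by 1024 MiB before iteration 2. Read off the xtrace, test_validate_checksum has roughly this shape; tcp_dd and the recorded checksums come from the harness, $testdir stands in for the full test/ftl path, the array name expected_sums is a placeholder, and the mismatch handling is assumed:

  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # Copy 1024 x 1 MiB blocks from ftln1 at the current offset.
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      # Compare against the checksum recorded before shutdown.
      [[ $sum == "${expected_sums[i]}" ]] || return 1
  done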
00:32:12.642 [2024-12-06 15:59:01.073997] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94807 ] 00:32:12.642 [2024-12-06 15:59:01.236105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.642 [2024-12-06 15:59:01.286211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.021  [2024-12-06T15:59:03.650Z] Copying: 532/1024 [MB] (532 MBps) [2024-12-06T15:59:04.218Z] Copying: 1024/1024 [MB] (average 529 MBps) 00:32:15.525 00:32:15.525 15:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:15.525 15:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=05f06c957ca4baa040f7dd82348534e1 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 05f06c957ca4baa040f7dd82348534e1 != \0\5\f\0\6\c\9\5\7\c\a\4\b\a\a\0\4\0\f\7\d\d\8\2\3\4\8\5\3\4\e\1 ]] 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94682 ]] 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94682 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94863 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94863 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94863 ']' 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
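Both checksums match, so the test proceeds to the dirty-shutdown phase: instead of a clean 'FTL shutdown', the running target is simply killed and a fresh target is started against the same devices. From the xtrace above (variable form generalized from the literal pid 94682):

  # tcp_target_shutdown_dirty, as read from the trace: SIGKILL the target so
  # FTL gets no chance to persist metadata or clear the dirty state.
  [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
  unset spdk_tgt_pid
  # tcp_target_setup then relaunches spdk_tgt with the same tgt.json config;
  # the 'FTL startup' steps that follow below come up from that dirty state.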
00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.434 15:59:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.434 [2024-12-06 15:59:05.958328] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:17.434 [2024-12-06 15:59:05.958504] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94863 ] 00:32:17.434 [2024-12-06 15:59:06.107134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.434 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94682 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:17.691 [2024-12-06 15:59:06.141872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.949 [2024-12-06 15:59:06.515763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:17.949 [2024-12-06 15:59:06.515860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:18.208 [2024-12-06 15:59:06.660184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.660235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:18.208 [2024-12-06 15:59:06.660264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:18.208 [2024-12-06 15:59:06.660275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.660337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.660362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:18.208 [2024-12-06 15:59:06.660379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:18.208 [2024-12-06 15:59:06.660389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.660419] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:18.208 [2024-12-06 15:59:06.660642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:18.208 [2024-12-06 15:59:06.660701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.660715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:18.208 [2024-12-06 15:59:06.660727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:32:18.208 [2024-12-06 15:59:06.660738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.661151] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:18.208 [2024-12-06 15:59:06.665832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.665873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:18.208 [2024-12-06 15:59:06.665897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.682 ms 00:32:18.208 [2024-12-06 15:59:06.665907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.666706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:18.208 [2024-12-06 15:59:06.666742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:18.208 [2024-12-06 15:59:06.666757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:18.208 [2024-12-06 15:59:06.666778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.667150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.667170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:18.208 [2024-12-06 15:59:06.667182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:32:18.208 [2024-12-06 15:59:06.667192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.667238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.667253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:18.208 [2024-12-06 15:59:06.667266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:18.208 [2024-12-06 15:59:06.667285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.667317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.667342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:18.208 [2024-12-06 15:59:06.667357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:18.208 [2024-12-06 15:59:06.667368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.667403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:18.208 [2024-12-06 15:59:06.668248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.668274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:18.208 [2024-12-06 15:59:06.668287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:32:18.208 [2024-12-06 15:59:06.668297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.668336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.668356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:18.208 [2024-12-06 15:59:06.668368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:18.208 [2024-12-06 15:59:06.668387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.668412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:18.208 [2024-12-06 15:59:06.668441] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:18.208 [2024-12-06 15:59:06.668476] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:18.208 [2024-12-06 15:59:06.668504] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:18.208 [2024-12-06 15:59:06.668600] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:18.208 [2024-12-06 15:59:06.668615] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:18.208 [2024-12-06 15:59:06.668628] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:18.208 [2024-12-06 15:59:06.668650] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:18.208 [2024-12-06 15:59:06.668662] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:18.208 [2024-12-06 15:59:06.668711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:18.208 [2024-12-06 15:59:06.668731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:18.208 [2024-12-06 15:59:06.668742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:18.208 [2024-12-06 15:59:06.668759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:18.208 [2024-12-06 15:59:06.668779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.668790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:18.208 [2024-12-06 15:59:06.668806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:32:18.208 [2024-12-06 15:59:06.668817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.208 [2024-12-06 15:59:06.668896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.208 [2024-12-06 15:59:06.668911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:18.209 [2024-12-06 15:59:06.668928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:32:18.209 [2024-12-06 15:59:06.668958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.209 [2024-12-06 15:59:06.669088] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:18.209 [2024-12-06 15:59:06.669106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:18.209 [2024-12-06 15:59:06.669118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:18.209 [2024-12-06 15:59:06.669155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:18.209 [2024-12-06 15:59:06.669174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:18.209 [2024-12-06 15:59:06.669185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:18.209 [2024-12-06 15:59:06.669194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:18.209 [2024-12-06 15:59:06.669214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:18.209 [2024-12-06 15:59:06.669222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:18.209 [2024-12-06 15:59:06.669247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:18.209 [2024-12-06 15:59:06.669257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:18.209 [2024-12-06 15:59:06.669279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:18.209 [2024-12-06 15:59:06.669288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:18.209 [2024-12-06 15:59:06.669307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:18.209 [2024-12-06 15:59:06.669334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:18.209 [2024-12-06 15:59:06.669361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:18.209 [2024-12-06 15:59:06.669389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:18.209 [2024-12-06 15:59:06.669419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:18.209 [2024-12-06 15:59:06.669447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:18.209 [2024-12-06 15:59:06.669474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:18.209 [2024-12-06 15:59:06.669500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:18.209 [2024-12-06 15:59:06.669509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669519] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:18.209 [2024-12-06 15:59:06.669530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:18.209 [2024-12-06 15:59:06.669540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:18.209 [2024-12-06 15:59:06.669564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:18.209 [2024-12-06 15:59:06.669575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:18.209 [2024-12-06 15:59:06.669584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:18.209 [2024-12-06 15:59:06.669594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:18.209 [2024-12-06 15:59:06.669604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:18.209 [2024-12-06 15:59:06.669613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:18.209 [2024-12-06 15:59:06.669624] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:18.209 [2024-12-06 15:59:06.669636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:18.209 [2024-12-06 15:59:06.669657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:18.209 [2024-12-06 15:59:06.669687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:18.209 [2024-12-06 15:59:06.669696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:18.209 [2024-12-06 15:59:06.669706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:18.209 [2024-12-06 15:59:06.669719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:18.209 [2024-12-06 15:59:06.669789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:18.209 [2024-12-06 15:59:06.669801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:18.209 [2024-12-06 15:59:06.669833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:18.209 [2024-12-06 15:59:06.669844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:18.209 [2024-12-06 15:59:06.669853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:18.209 [2024-12-06 15:59:06.669864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.209 [2024-12-06 15:59:06.669874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:18.209 [2024-12-06 15:59:06.669889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:32:18.209 [2024-12-06 15:59:06.669902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.209 [2024-12-06 15:59:06.682450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.209 [2024-12-06 15:59:06.682503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:18.209 [2024-12-06 15:59:06.682532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.031 ms 00:32:18.209 [2024-12-06 15:59:06.682543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.209 [2024-12-06 15:59:06.682612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.209 [2024-12-06 15:59:06.682630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:18.209 [2024-12-06 15:59:06.682642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:18.209 [2024-12-06 15:59:06.682657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.209 [2024-12-06 15:59:06.698002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.209 [2024-12-06 15:59:06.698052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:18.209 [2024-12-06 15:59:06.698069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.271 ms 00:32:18.209 [2024-12-06 15:59:06.698081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.209 [2024-12-06 15:59:06.698141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.209 [2024-12-06 15:59:06.698157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:18.210 [2024-12-06 15:59:06.698176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:18.210 [2024-12-06 15:59:06.698197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.698335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.698366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:18.210 [2024-12-06 15:59:06.698379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:32:18.210 [2024-12-06 15:59:06.698390] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.698449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.698465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:18.210 [2024-12-06 15:59:06.698476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:18.210 [2024-12-06 15:59:06.698491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.709272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.709589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:18.210 [2024-12-06 15:59:06.709629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.750 ms 00:32:18.210 [2024-12-06 15:59:06.709642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.709804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.709826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:18.210 [2024-12-06 15:59:06.709845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:18.210 [2024-12-06 15:59:06.709857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.725848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.725890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:18.210 [2024-12-06 15:59:06.725921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.917 ms 00:32:18.210 [2024-12-06 15:59:06.725933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.727011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.727046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:18.210 [2024-12-06 15:59:06.727072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:32:18.210 [2024-12-06 15:59:06.727089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.750451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.750751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:18.210 [2024-12-06 15:59:06.750781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.306 ms 00:32:18.210 [2024-12-06 15:59:06.750794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.750978] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:18.210 [2024-12-06 15:59:06.751082] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:18.210 [2024-12-06 15:59:06.751184] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:18.210 [2024-12-06 15:59:06.751297] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:18.210 [2024-12-06 15:59:06.751310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.751321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:18.210 [2024-12-06 
15:59:06.751333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:32:18.210 [2024-12-06 15:59:06.751359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.751439] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:18.210 [2024-12-06 15:59:06.751460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.751472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:18.210 [2024-12-06 15:59:06.751484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:18.210 [2024-12-06 15:59:06.751502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.754047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.754095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:18.210 [2024-12-06 15:59:06.754111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.506 ms 00:32:18.210 [2024-12-06 15:59:06.754135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.754815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.754856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:18.210 [2024-12-06 15:59:06.754881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:18.210 [2024-12-06 15:59:06.754900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.210 [2024-12-06 15:59:06.755015] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:18.210 [2024-12-06 15:59:06.755321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.210 [2024-12-06 15:59:06.755334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:18.210 [2024-12-06 15:59:06.755355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:32:18.210 [2024-12-06 15:59:06.755367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.776 [2024-12-06 15:59:07.329882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.776 [2024-12-06 15:59:07.329951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:18.776 [2024-12-06 15:59:07.329972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 574.189 ms 00:32:18.776 [2024-12-06 15:59:07.329984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.776 [2024-12-06 15:59:07.331711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.776 [2024-12-06 15:59:07.331764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:18.776 [2024-12-06 15:59:07.331802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:32:18.776 [2024-12-06 15:59:07.331828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.776 [2024-12-06 15:59:07.332585] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:18.776 [2024-12-06 15:59:07.332793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.776 [2024-12-06 15:59:07.332828] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:18.776 [2024-12-06 15:59:07.332841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:32:18.776 [2024-12-06 15:59:07.332852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.776 [2024-12-06 15:59:07.332901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.776 [2024-12-06 15:59:07.332926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:18.776 [2024-12-06 15:59:07.332952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:18.776 [2024-12-06 15:59:07.332965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:18.776 [2024-12-06 15:59:07.333027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 578.015 ms, result 0 00:32:18.776 [2024-12-06 15:59:07.333092] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:18.776 [2024-12-06 15:59:07.333276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:18.776 [2024-12-06 15:59:07.333288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:18.776 [2024-12-06 15:59:07.333298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:32:18.777 [2024-12-06 15:59:07.333309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.342 [2024-12-06 15:59:07.907108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.342 [2024-12-06 15:59:07.907182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:19.342 [2024-12-06 15:59:07.907213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 573.438 ms 00:32:19.342 [2024-12-06 15:59:07.907224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.342 [2024-12-06 15:59:07.908897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.342 [2024-12-06 15:59:07.908952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:19.342 [2024-12-06 15:59:07.908974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.246 ms 00:32:19.342 [2024-12-06 15:59:07.908985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.342 [2024-12-06 15:59:07.909608] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:19.342 [2024-12-06 15:59:07.909850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.342 [2024-12-06 15:59:07.909870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:19.342 [2024-12-06 15:59:07.909882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:32:19.342 [2024-12-06 15:59:07.909892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.342 [2024-12-06 15:59:07.909963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.342 [2024-12-06 15:59:07.909982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:19.342 [2024-12-06 15:59:07.909994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:19.342 [2024-12-06 15:59:07.910004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.342 [2024-12-06 
15:59:07.910054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 576.967 ms, result 0 00:32:19.342 [2024-12-06 15:59:07.910106] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:19.342 [2024-12-06 15:59:07.910122] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:19.342 [2024-12-06 15:59:07.910135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.910147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:19.343 [2024-12-06 15:59:07.910185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1155.159 ms 00:32:19.343 [2024-12-06 15:59:07.910202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.910238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.910253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:19.343 [2024-12-06 15:59:07.910265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:19.343 [2024-12-06 15:59:07.910275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.917870] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:19.343 [2024-12-06 15:59:07.918010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.918029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:19.343 [2024-12-06 15:59:07.918047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.716 ms 00:32:19.343 [2024-12-06 15:59:07.918057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.918641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.918674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:19.343 [2024-12-06 15:59:07.918689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:32:19.343 [2024-12-06 15:59:07.918699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.920584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.920771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:19.343 [2024-12-06 15:59:07.920812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.863 ms 00:32:19.343 [2024-12-06 15:59:07.920824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.920878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.920905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:19.343 [2024-12-06 15:59:07.920918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:19.343 [2024-12-06 15:59:07.920928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.921082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.921100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:19.343 
[2024-12-06 15:59:07.921117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:19.343 [2024-12-06 15:59:07.921126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.921152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.921165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:19.343 [2024-12-06 15:59:07.921177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:19.343 [2024-12-06 15:59:07.921187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.921232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:19.343 [2024-12-06 15:59:07.921249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.921260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:19.343 [2024-12-06 15:59:07.921270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:19.343 [2024-12-06 15:59:07.921294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.921351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:19.343 [2024-12-06 15:59:07.921365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:19.343 [2024-12-06 15:59:07.921376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:19.343 [2024-12-06 15:59:07.921386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:19.343 [2024-12-06 15:59:07.922699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1261.969 ms, result 0 00:32:19.343 [2024-12-06 15:59:07.938360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.343 [2024-12-06 15:59:07.954373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:19.343 [2024-12-06 15:59:07.962508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:20.276 Validate MD5 checksum, iteration 1 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:20.276 15:59:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:20.276 15:59:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:20.276 [2024-12-06 15:59:08.682855] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:20.276 [2024-12-06 15:59:08.683283] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94898 ] 00:32:20.276 [2024-12-06 15:59:08.839719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.276 [2024-12-06 15:59:08.882868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.671  [2024-12-06T15:59:11.299Z] Copying: 526/1024 [MB] (526 MBps) [2024-12-06T15:59:13.827Z] Copying: 1024/1024 [MB] (average 527 MBps) 00:32:25.134 00:32:25.392 15:59:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:25.392 15:59:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ef327e0fd117a4b88da508b34587485 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ef327e0fd117a4b88da508b34587485 != \0\e\f\3\2\7\e\0\f\d\1\1\7\a\4\b\8\8\d\a\5\0\8\b\3\4\5\8\7\4\8\5 ]] 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:27.295 Validate MD5 checksum, iteration 2 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:27.295 15:59:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:27.295 [2024-12-06 15:59:15.648445] Starting SPDK v25.01-pre git sha1 
a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:27.295 [2024-12-06 15:59:15.648638] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94975 ] 00:32:27.295 [2024-12-06 15:59:15.812547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.295 [2024-12-06 15:59:15.854412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.675  [2024-12-06T15:59:18.307Z] Copying: 535/1024 [MB] (535 MBps) [2024-12-06T15:59:19.683Z] Copying: 1024/1024 [MB] (average 530 MBps) 00:32:30.990 00:32:30.990 15:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:30.990 15:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=05f06c957ca4baa040f7dd82348534e1 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 05f06c957ca4baa040f7dd82348534e1 != \0\5\f\0\6\c\9\5\7\c\a\4\b\a\a\0\4\0\f\7\d\d\8\2\3\4\8\5\3\4\e\1 ]] 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94863 ]] 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94863 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94863 ']' 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94863 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94863 00:32:32.894 killing process with pid 94863 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94863' 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94863 00:32:32.894 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94863 00:32:33.154 [2024-12-06 15:59:21.635422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:33.154 [2024-12-06 15:59:21.640410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.154 [2024-12-06 15:59:21.640453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:33.154 [2024-12-06 15:59:21.640472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:33.154 [2024-12-06 15:59:21.640483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.154 [2024-12-06 15:59:21.640513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:33.154 [2024-12-06 15:59:21.641335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.154 [2024-12-06 15:59:21.641372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:33.154 [2024-12-06 15:59:21.641387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.802 ms 00:32:33.154 [2024-12-06 15:59:21.641397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.154 [2024-12-06 15:59:21.641605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.154 [2024-12-06 15:59:21.641622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:33.154 [2024-12-06 15:59:21.641634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:32:33.154 [2024-12-06 15:59:21.641644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.154 [2024-12-06 15:59:21.642835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.154 [2024-12-06 15:59:21.642874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:33.154 [2024-12-06 15:59:21.642889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:32:33.154 [2024-12-06 15:59:21.642907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.154 [2024-12-06 15:59:21.643853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.154 [2024-12-06 15:59:21.644153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:33.154 [2024-12-06 15:59:21.644178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:32:33.154 [2024-12-06 15:59:21.644191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.154 [2024-12-06 15:59:21.645631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.645670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:33.155 [2024-12-06 15:59:21.645693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.396 ms 00:32:33.155 [2024-12-06 15:59:21.645709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.647307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.647490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:33.155 [2024-12-06 15:59:21.647516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.560 ms 00:32:33.155 [2024-12-06 15:59:21.647530] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.647656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.647677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:33.155 [2024-12-06 15:59:21.647691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:32:33.155 [2024-12-06 15:59:21.647710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.649086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.649131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:33.155 [2024-12-06 15:59:21.649145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.352 ms 00:32:33.155 [2024-12-06 15:59:21.649154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.650459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.650493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:33.155 [2024-12-06 15:59:21.650506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:32:33.155 [2024-12-06 15:59:21.650515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.651676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.651710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:33.155 [2024-12-06 15:59:21.651724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.125 ms 00:32:33.155 [2024-12-06 15:59:21.651734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.652893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.652929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:33.155 [2024-12-06 15:59:21.652962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:32:33.155 [2024-12-06 15:59:21.652973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.653010] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:33.155 [2024-12-06 15:59:21.653032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:33.155 [2024-12-06 15:59:21.653046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:33.155 [2024-12-06 15:59:21.653056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:33.155 [2024-12-06 15:59:21.653069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 
[2024-12-06 15:59:21.653121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:33.155 [2024-12-06 15:59:21.653232] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:33.155 [2024-12-06 15:59:21.653243] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c3a705be-1267-4d4c-a4d4-d257e9b4f953 00:32:33.155 [2024-12-06 15:59:21.653254] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:33.155 [2024-12-06 15:59:21.653276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:33.155 [2024-12-06 15:59:21.653292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:33.155 [2024-12-06 15:59:21.653304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:33.155 [2024-12-06 15:59:21.653314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:33.155 [2024-12-06 15:59:21.653324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:33.155 [2024-12-06 15:59:21.653339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:33.155 [2024-12-06 15:59:21.653350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:33.155 [2024-12-06 15:59:21.653361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:33.155 [2024-12-06 15:59:21.653373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.653384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:33.155 [2024-12-06 15:59:21.653395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:32:33.155 [2024-12-06 15:59:21.653406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.655505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.655532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:33.155 [2024-12-06 15:59:21.655544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.078 ms 00:32:33.155 [2024-12-06 15:59:21.655554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:33.155 [2024-12-06 15:59:21.655686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:33.155 [2024-12-06 15:59:21.655700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:33.155 [2024-12-06 15:59:21.655712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.101 ms 00:32:33.155 [2024-12-06 15:59:21.655723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.665025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.665183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:33.155 [2024-12-06 15:59:21.665219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.665236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.665282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.665298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:33.155 [2024-12-06 15:59:21.665309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.665320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.665395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.665412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:33.155 [2024-12-06 15:59:21.665424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.665434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.665464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.665478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:33.155 [2024-12-06 15:59:21.665489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.665500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.680603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.680662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:33.155 [2024-12-06 15:59:21.680689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.680703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.690970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.691020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:33.155 [2024-12-06 15:59:21.691037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.691048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.691143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.691161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:33.155 [2024-12-06 15:59:21.691174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.691185] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.155 [2024-12-06 15:59:21.691251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.155 [2024-12-06 15:59:21.691273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:33.155 [2024-12-06 15:59:21.691285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.155 [2024-12-06 15:59:21.691296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.156 [2024-12-06 15:59:21.691384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.156 [2024-12-06 15:59:21.691401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:33.156 [2024-12-06 15:59:21.691424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.156 [2024-12-06 15:59:21.691435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.156 [2024-12-06 15:59:21.691482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.156 [2024-12-06 15:59:21.691500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:33.156 [2024-12-06 15:59:21.691517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.156 [2024-12-06 15:59:21.691528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.156 [2024-12-06 15:59:21.691576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.156 [2024-12-06 15:59:21.691592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:33.156 [2024-12-06 15:59:21.691603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.156 [2024-12-06 15:59:21.691613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.156 [2024-12-06 15:59:21.691667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:33.156 [2024-12-06 15:59:21.691689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:33.156 [2024-12-06 15:59:21.691700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:33.156 [2024-12-06 15:59:21.691722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:33.156 [2024-12-06 15:59:21.691900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 51.430 ms, result 0 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:33.414 Remove shared memory files 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:33.414 15:59:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94682 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:33.414 ************************************ 00:32:33.414 END TEST ftl_upgrade_shutdown 00:32:33.414 ************************************ 00:32:33.414 00:32:33.414 real 1m14.835s 00:32:33.414 user 1m38.712s 00:32:33.414 sys 0m24.180s 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.414 15:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:33.414 15:59:22 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:33.414 15:59:22 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:33.414 15:59:22 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:33.414 15:59:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.414 15:59:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:33.415 ************************************ 00:32:33.415 START TEST ftl_restore_fast 00:32:33.415 ************************************ 00:32:33.415 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:33.415 * Looking for test storage... 00:32:33.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:33.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.675 --rc genhtml_branch_coverage=1 00:32:33.675 --rc genhtml_function_coverage=1 00:32:33.675 --rc genhtml_legend=1 00:32:33.675 --rc geninfo_all_blocks=1 00:32:33.675 --rc geninfo_unexecuted_blocks=1 00:32:33.675 00:32:33.675 ' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:33.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.675 --rc genhtml_branch_coverage=1 00:32:33.675 --rc genhtml_function_coverage=1 00:32:33.675 --rc genhtml_legend=1 00:32:33.675 --rc geninfo_all_blocks=1 00:32:33.675 --rc geninfo_unexecuted_blocks=1 00:32:33.675 00:32:33.675 ' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:33.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.675 --rc genhtml_branch_coverage=1 00:32:33.675 --rc genhtml_function_coverage=1 00:32:33.675 --rc genhtml_legend=1 00:32:33.675 --rc geninfo_all_blocks=1 00:32:33.675 --rc geninfo_unexecuted_blocks=1 00:32:33.675 00:32:33.675 ' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:33.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.675 --rc genhtml_branch_coverage=1 00:32:33.675 --rc genhtml_function_coverage=1 00:32:33.675 --rc genhtml_legend=1 00:32:33.675 --rc geninfo_all_blocks=1 00:32:33.675 --rc geninfo_unexecuted_blocks=1 00:32:33.675 00:32:33.675 ' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
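The scripts/common.sh xtrace above is the coverage setup deciding whether the installed lcov (1.15) predates major version 2 before choosing which --rc flags to pass: both version strings are split on '.' and '-' (IFS=.-), the shorter one is implicitly padded, and the components are compared pairwise until one side wins. A condensed re-implementation of that comparison for illustration, assuming purely numeric components (the version_lt name is ours, not the scripts/common.sh source):

    #!/usr/bin/env bash
    # version_lt A B -- succeed when A sorts strictly before B, comparing
    # dot/dash-separated numeric components; missing components count as 0.
    version_lt() {
        local IFS='.-'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov 1.15 < 2'   # prints, as the trace concludes

The trace reaches the same verdict (lt 1.15 2 succeeds), which is why the LCOV_OPTS exported above carry the old-style lcov_branch_coverage/lcov_function_coverage switches.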
00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NYBcJAQ5Ng 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:33.675 15:59:22 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:32:33.675 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95117 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95117 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 95117 ']' 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.676 15:59:22 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:33.934 [2024-12-06 15:59:22.376015] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:33.934 [2024-12-06 15:59:22.376227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95117 ] 00:32:33.934 [2024-12-06 15:59:22.532061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.934 [2024-12-06 15:59:22.568960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:32:34.870 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:32:35.130 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:35.389 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:35.389 { 00:32:35.389 "name": "nvme0n1", 00:32:35.389 "aliases": [ 00:32:35.389 "23f27dde-729e-48ba-afee-1ba60ddf641c" 00:32:35.389 ], 00:32:35.389 "product_name": "NVMe disk", 00:32:35.389 "block_size": 4096, 00:32:35.389 "num_blocks": 1310720, 00:32:35.389 "uuid": "23f27dde-729e-48ba-afee-1ba60ddf641c", 00:32:35.389 "numa_id": -1, 00:32:35.389 "assigned_rate_limits": { 00:32:35.389 "rw_ios_per_sec": 0, 00:32:35.389 "rw_mbytes_per_sec": 0, 00:32:35.389 "r_mbytes_per_sec": 0, 00:32:35.389 "w_mbytes_per_sec": 0 00:32:35.389 }, 00:32:35.389 "claimed": true, 00:32:35.389 "claim_type": "read_many_write_one", 00:32:35.389 "zoned": false, 00:32:35.389 "supported_io_types": { 00:32:35.389 "read": true, 00:32:35.389 "write": true, 00:32:35.389 "unmap": true, 00:32:35.389 "flush": true, 00:32:35.389 "reset": true, 00:32:35.389 "nvme_admin": true, 00:32:35.389 "nvme_io": true, 00:32:35.389 "nvme_io_md": false, 00:32:35.389 "write_zeroes": true, 00:32:35.389 "zcopy": false, 00:32:35.389 "get_zone_info": false, 00:32:35.389 "zone_management": false, 00:32:35.389 "zone_append": false, 00:32:35.389 "compare": true, 00:32:35.390 "compare_and_write": false, 00:32:35.390 "abort": true, 00:32:35.390 "seek_hole": false, 00:32:35.390 "seek_data": false, 00:32:35.390 "copy": true, 00:32:35.390 "nvme_iov_md": false 00:32:35.390 }, 00:32:35.390 "driver_specific": { 00:32:35.390 "nvme": [ 00:32:35.390 { 00:32:35.390 "pci_address": "0000:00:11.0", 00:32:35.390 "trid": { 00:32:35.390 "trtype": "PCIe", 00:32:35.390 "traddr": "0000:00:11.0" 00:32:35.390 }, 00:32:35.390 "ctrlr_data": { 00:32:35.390 "cntlid": 0, 00:32:35.390 "vendor_id": "0x1b36", 00:32:35.390 "model_number": "QEMU NVMe Ctrl", 00:32:35.390 "serial_number": "12341", 00:32:35.390 "firmware_revision": "8.0.0", 00:32:35.390 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:35.390 "oacs": { 00:32:35.390 "security": 0, 00:32:35.390 "format": 1, 00:32:35.390 "firmware": 0, 00:32:35.390 "ns_manage": 1 00:32:35.390 }, 00:32:35.390 "multi_ctrlr": false, 00:32:35.390 "ana_reporting": false 00:32:35.390 }, 00:32:35.390 "vs": { 00:32:35.390 "nvme_version": "1.4" 00:32:35.390 }, 00:32:35.390 "ns_data": { 00:32:35.390 "id": 1, 00:32:35.390 "can_share": false 00:32:35.390 } 00:32:35.390 } 00:32:35.390 ], 00:32:35.390 "mp_policy": "active_passive" 00:32:35.390 } 00:32:35.390 } 00:32:35.390 ]' 00:32:35.390 15:59:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:35.390 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:35.649 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=fe4e73d9-3f27-4391-959c-3cd81520ad2b 00:32:35.649 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:32:35.649 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe4e73d9-3f27-4391-959c-3cd81520ad2b 00:32:36.218 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:36.218 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6f9b6aea-ef61-4f10-b928-65a648ce3294 00:32:36.218 15:59:24 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6f9b6aea-ef61-4f10-b928-65a648ce3294 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:36.477 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:36.735 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:36.735 { 00:32:36.735 "name": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:36.735 "aliases": [ 00:32:36.735 "lvs/nvme0n1p0" 00:32:36.735 ], 00:32:36.735 "product_name": "Logical Volume", 00:32:36.735 "block_size": 4096, 00:32:36.735 "num_blocks": 26476544, 00:32:36.735 "uuid": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:36.735 "assigned_rate_limits": { 00:32:36.735 "rw_ios_per_sec": 0, 00:32:36.735 "rw_mbytes_per_sec": 0, 00:32:36.735 "r_mbytes_per_sec": 0, 00:32:36.735 "w_mbytes_per_sec": 0 00:32:36.735 }, 00:32:36.735 "claimed": false, 00:32:36.735 "zoned": false, 00:32:36.735 "supported_io_types": { 00:32:36.735 "read": true, 00:32:36.735 "write": true, 00:32:36.735 "unmap": true, 00:32:36.735 "flush": false, 00:32:36.735 "reset": true, 00:32:36.735 "nvme_admin": false, 00:32:36.735 "nvme_io": false, 00:32:36.735 "nvme_io_md": false, 00:32:36.735 "write_zeroes": true, 00:32:36.735 "zcopy": false, 00:32:36.735 "get_zone_info": false, 00:32:36.735 "zone_management": false, 00:32:36.735 
"zone_append": false, 00:32:36.735 "compare": false, 00:32:36.735 "compare_and_write": false, 00:32:36.735 "abort": false, 00:32:36.735 "seek_hole": true, 00:32:36.735 "seek_data": true, 00:32:36.735 "copy": false, 00:32:36.735 "nvme_iov_md": false 00:32:36.735 }, 00:32:36.735 "driver_specific": { 00:32:36.735 "lvol": { 00:32:36.735 "lvol_store_uuid": "6f9b6aea-ef61-4f10-b928-65a648ce3294", 00:32:36.735 "base_bdev": "nvme0n1", 00:32:36.735 "thin_provision": true, 00:32:36.735 "num_allocated_clusters": 0, 00:32:36.735 "snapshot": false, 00:32:36.735 "clone": false, 00:32:36.735 "esnap_clone": false 00:32:36.735 } 00:32:36.735 } 00:32:36.735 } 00:32:36.735 ]' 00:32:36.735 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:32:36.736 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:37.300 { 00:32:37.300 "name": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:37.300 "aliases": [ 00:32:37.300 "lvs/nvme0n1p0" 00:32:37.300 ], 00:32:37.300 "product_name": "Logical Volume", 00:32:37.300 "block_size": 4096, 00:32:37.300 "num_blocks": 26476544, 00:32:37.300 "uuid": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:37.300 "assigned_rate_limits": { 00:32:37.300 "rw_ios_per_sec": 0, 00:32:37.300 "rw_mbytes_per_sec": 0, 00:32:37.300 "r_mbytes_per_sec": 0, 00:32:37.300 "w_mbytes_per_sec": 0 00:32:37.300 }, 00:32:37.300 "claimed": false, 00:32:37.300 "zoned": false, 00:32:37.300 "supported_io_types": { 00:32:37.300 "read": true, 00:32:37.300 "write": true, 00:32:37.300 "unmap": true, 00:32:37.300 "flush": false, 00:32:37.300 "reset": true, 00:32:37.300 "nvme_admin": false, 00:32:37.300 "nvme_io": false, 00:32:37.300 "nvme_io_md": false, 00:32:37.300 "write_zeroes": true, 00:32:37.300 "zcopy": false, 00:32:37.300 "get_zone_info": false, 00:32:37.300 
"zone_management": false, 00:32:37.300 "zone_append": false, 00:32:37.300 "compare": false, 00:32:37.300 "compare_and_write": false, 00:32:37.300 "abort": false, 00:32:37.300 "seek_hole": true, 00:32:37.300 "seek_data": true, 00:32:37.300 "copy": false, 00:32:37.300 "nvme_iov_md": false 00:32:37.300 }, 00:32:37.300 "driver_specific": { 00:32:37.300 "lvol": { 00:32:37.300 "lvol_store_uuid": "6f9b6aea-ef61-4f10-b928-65a648ce3294", 00:32:37.300 "base_bdev": "nvme0n1", 00:32:37.300 "thin_provision": true, 00:32:37.300 "num_allocated_clusters": 0, 00:32:37.300 "snapshot": false, 00:32:37.300 "clone": false, 00:32:37.300 "esnap_clone": false 00:32:37.300 } 00:32:37.300 } 00:32:37.300 } 00:32:37.300 ]' 00:32:37.300 15:59:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:32:37.563 15:59:26 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e77f3f8-6637-4fcd-a304-2e26eefd3311 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:37.872 { 00:32:37.872 "name": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:37.872 "aliases": [ 00:32:37.872 "lvs/nvme0n1p0" 00:32:37.872 ], 00:32:37.872 "product_name": "Logical Volume", 00:32:37.872 "block_size": 4096, 00:32:37.872 "num_blocks": 26476544, 00:32:37.872 "uuid": "6e77f3f8-6637-4fcd-a304-2e26eefd3311", 00:32:37.872 "assigned_rate_limits": { 00:32:37.872 "rw_ios_per_sec": 0, 00:32:37.872 "rw_mbytes_per_sec": 0, 00:32:37.872 "r_mbytes_per_sec": 0, 00:32:37.872 "w_mbytes_per_sec": 0 00:32:37.872 }, 00:32:37.872 "claimed": false, 00:32:37.872 "zoned": false, 00:32:37.872 "supported_io_types": { 00:32:37.872 "read": true, 00:32:37.872 "write": true, 00:32:37.872 "unmap": true, 00:32:37.872 "flush": false, 00:32:37.872 "reset": true, 00:32:37.872 "nvme_admin": false, 00:32:37.872 "nvme_io": false, 00:32:37.872 "nvme_io_md": false, 00:32:37.872 "write_zeroes": true, 00:32:37.872 "zcopy": false, 00:32:37.872 "get_zone_info": false, 00:32:37.872 "zone_management": false, 00:32:37.872 "zone_append": false, 00:32:37.872 "compare": false, 00:32:37.872 "compare_and_write": false, 00:32:37.872 "abort": false, 
00:32:37.872 "seek_hole": true, 00:32:37.872 "seek_data": true, 00:32:37.872 "copy": false, 00:32:37.872 "nvme_iov_md": false 00:32:37.872 }, 00:32:37.872 "driver_specific": { 00:32:37.872 "lvol": { 00:32:37.872 "lvol_store_uuid": "6f9b6aea-ef61-4f10-b928-65a648ce3294", 00:32:37.872 "base_bdev": "nvme0n1", 00:32:37.872 "thin_provision": true, 00:32:37.872 "num_allocated_clusters": 0, 00:32:37.872 "snapshot": false, 00:32:37.872 "clone": false, 00:32:37.872 "esnap_clone": false 00:32:37.872 } 00:32:37.872 } 00:32:37.872 } 00:32:37.872 ]' 00:32:37.872 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6e77f3f8-6637-4fcd-a304-2e26eefd3311 --l2p_dram_limit 10' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:32:38.158 15:59:26 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e77f3f8-6637-4fcd-a304-2e26eefd3311 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:32:38.158 [2024-12-06 15:59:26.821442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.821506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:38.158 [2024-12-06 15:59:26.821525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:38.158 [2024-12-06 15:59:26.821539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.821608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.821630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:38.158 [2024-12-06 15:59:26.821646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:38.158 [2024-12-06 15:59:26.821663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.821689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:38.158 [2024-12-06 15:59:26.821913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:38.158 [2024-12-06 15:59:26.821959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.821977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:38.158 [2024-12-06 15:59:26.821988] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:32:38.158 [2024-12-06 15:59:26.822000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.822077] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8573de67-cc7d-4b52-b777-0a845efb4629 00:32:38.158 [2024-12-06 15:59:26.824273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.824498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:38.158 [2024-12-06 15:59:26.824533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:38.158 [2024-12-06 15:59:26.824545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.834800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.834843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:38.158 [2024-12-06 15:59:26.834874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.197 ms 00:32:38.158 [2024-12-06 15:59:26.834885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.835014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.835049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:38.158 [2024-12-06 15:59:26.835064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:32:38.158 [2024-12-06 15:59:26.835075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.835157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.835176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:38.158 [2024-12-06 15:59:26.835190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:38.158 [2024-12-06 15:59:26.835201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.835235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:38.158 [2024-12-06 15:59:26.837699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.837739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:38.158 [2024-12-06 15:59:26.837755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:32:38.158 [2024-12-06 15:59:26.837768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.837811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.837828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:38.158 [2024-12-06 15:59:26.837848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:38.158 [2024-12-06 15:59:26.837865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.837890] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:38.158 [2024-12-06 15:59:26.838058] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:38.158 [2024-12-06 15:59:26.838080] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:38.158 [2024-12-06 15:59:26.838097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:38.158 [2024-12-06 15:59:26.838117] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:38.158 [2024-12-06 15:59:26.838138] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:38.158 [2024-12-06 15:59:26.838151] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:38.158 [2024-12-06 15:59:26.838168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:38.158 [2024-12-06 15:59:26.838178] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:38.158 [2024-12-06 15:59:26.838190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:38.158 [2024-12-06 15:59:26.838201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.838214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:38.158 [2024-12-06 15:59:26.838231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:32:38.158 [2024-12-06 15:59:26.838245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.838334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.158 [2024-12-06 15:59:26.838355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:38.158 [2024-12-06 15:59:26.838366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:38.158 [2024-12-06 15:59:26.838380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.158 [2024-12-06 15:59:26.838474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:38.158 [2024-12-06 15:59:26.838753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:38.158 [2024-12-06 15:59:26.838779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:38.158 [2024-12-06 15:59:26.838795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.158 [2024-12-06 15:59:26.838807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:38.159 [2024-12-06 15:59:26.838819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.838830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:38.159 [2024-12-06 15:59:26.838843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:38.159 [2024-12-06 15:59:26.838853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:38.159 [2024-12-06 15:59:26.838865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.159 [2024-12-06 15:59:26.838876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:38.159 [2024-12-06 15:59:26.838889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:38.159 [2024-12-06 15:59:26.838899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.159 [2024-12-06 15:59:26.838914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:38.159 [2024-12-06 15:59:26.838924] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:38.159 [2024-12-06 15:59:26.838949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.838963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:38.159 [2024-12-06 15:59:26.838976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:38.159 [2024-12-06 15:59:26.838986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.838999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:38.159 [2024-12-06 15:59:26.839010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.159 [2024-12-06 15:59:26.839031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:38.159 [2024-12-06 15:59:26.839045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.159 [2024-12-06 15:59:26.839067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:38.159 [2024-12-06 15:59:26.839077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.159 [2024-12-06 15:59:26.839098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:38.159 [2024-12-06 15:59:26.839112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.159 [2024-12-06 15:59:26.839148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:38.159 [2024-12-06 15:59:26.839158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.159 [2024-12-06 15:59:26.839178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:38.159 [2024-12-06 15:59:26.839189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:38.159 [2024-12-06 15:59:26.839198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.159 [2024-12-06 15:59:26.839209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:38.159 [2024-12-06 15:59:26.839219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:38.159 [2024-12-06 15:59:26.839231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:38.159 [2024-12-06 15:59:26.839251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:38.159 [2024-12-06 15:59:26.839260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:38.159 [2024-12-06 15:59:26.839291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:38.159 [2024-12-06 15:59:26.839307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:32:38.159 [2024-12-06 15:59:26.839318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.159 [2024-12-06 15:59:26.839331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:38.159 [2024-12-06 15:59:26.839341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:38.159 [2024-12-06 15:59:26.839353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:38.159 [2024-12-06 15:59:26.839362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:38.159 [2024-12-06 15:59:26.839373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:38.159 [2024-12-06 15:59:26.839389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:38.159 [2024-12-06 15:59:26.839404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:38.159 [2024-12-06 15:59:26.839420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:38.159 [2024-12-06 15:59:26.839458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:38.159 [2024-12-06 15:59:26.839474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:38.159 [2024-12-06 15:59:26.839485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:38.159 [2024-12-06 15:59:26.839497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:38.159 [2024-12-06 15:59:26.839507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:38.159 [2024-12-06 15:59:26.839522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:38.159 [2024-12-06 15:59:26.839532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:38.159 [2024-12-06 15:59:26.839544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:38.159 [2024-12-06 15:59:26.839554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
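Each blk_offs/blk_sz pair in this superblock dump is counted in 4 KiB FTL blocks, which is exactly how the MiB figures in the human-readable layout further up come about: region type 0x2 (the L2P) has blk_sz 0x5000 = 20480 blocks = 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB", and the 0x800-block regions are the four 8.00 MiB p2l checkpoint areas. A throwaway converter for checking the rest, again assuming one record per line and gawk for strtonum (layout_mib.awk is our name):

    # layout_mib.awk -- convert SB metadata blk_offs/blk_sz pairs into MiB,
    # assuming the 4 KiB FTL block size above (256 blocks per MiB).
    /blk_offs:0x/ {
        for (i = 1; i <= NF; i++) {
            if (sub(/^blk_offs:/, "", $i)) offs = strtonum($i)
            if (sub(/^blk_sz:/,   "", $i)) size = strtonum($i)
        }
        printf "offs %10.2f MiB  size %10.2f MiB\n", offs / 256, size / 256
    }

On the dump above it reproduces 0.50 MiB for the 0x80-block band_md regions and, in the base-device table that follows, 102400.00 MiB for the 0x1900000-block data region.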
00:32:38.159 [2024-12-06 15:59:26.839611] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:38.159 [2024-12-06 15:59:26.839623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:38.159 [2024-12-06 15:59:26.839647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:38.159 [2024-12-06 15:59:26.839661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:38.159 [2024-12-06 15:59:26.839671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:38.159 [2024-12-06 15:59:26.839686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.159 [2024-12-06 15:59:26.839698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:38.159 [2024-12-06 15:59:26.839722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:32:38.159 [2024-12-06 15:59:26.839740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.159 [2024-12-06 15:59:26.839822] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:38.159 [2024-12-06 15:59:26.839840] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:41.448 [2024-12-06 15:59:29.960543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.960622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:41.448 [2024-12-06 15:59:29.960645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3120.736 ms 00:32:41.448 [2024-12-06 15:59:29.960657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.976859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.976910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:41.448 [2024-12-06 15:59:29.976932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.066 ms 00:32:41.448 [2024-12-06 15:59:29.976976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.977113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.977130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:41.448 [2024-12-06 15:59:29.977145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:41.448 [2024-12-06 15:59:29.977157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.992021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.992348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:41.448 [2024-12-06 15:59:29.992380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.770 ms 00:32:41.448 [2024-12-06 15:59:29.992394] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.992445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.992459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:41.448 [2024-12-06 15:59:29.992483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:41.448 [2024-12-06 15:59:29.992493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.993142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.993160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:41.448 [2024-12-06 15:59:29.993175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:32:41.448 [2024-12-06 15:59:29.993186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:29.993348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:29.993364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:41.448 [2024-12-06 15:59:29.993378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:41.448 [2024-12-06 15:59:29.993388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.004365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.004429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:41.448 [2024-12-06 15:59:30.004458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.939 ms 00:32:41.448 [2024-12-06 15:59:30.004471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.025228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:41.448 [2024-12-06 15:59:30.030538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.030819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:41.448 [2024-12-06 15:59:30.030851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.945 ms 00:32:41.448 [2024-12-06 15:59:30.030867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.103071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.103127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:41.448 [2024-12-06 15:59:30.103148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.155 ms 00:32:41.448 [2024-12-06 15:59:30.103164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.103387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.103409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:41.448 [2024-12-06 15:59:30.103431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:32:41.448 [2024-12-06 15:59:30.103445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.107187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.107231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:32:41.448 [2024-12-06 15:59:30.107247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:32:41.448 [2024-12-06 15:59:30.107264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.110225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.110267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:41.448 [2024-12-06 15:59:30.110282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:32:41.448 [2024-12-06 15:59:30.110295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.448 [2024-12-06 15:59:30.110664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.448 [2024-12-06 15:59:30.110687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:41.448 [2024-12-06 15:59:30.110699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:32:41.448 [2024-12-06 15:59:30.110714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.145497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.145543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:41.707 [2024-12-06 15:59:30.145564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.759 ms 00:32:41.707 [2024-12-06 15:59:30.145577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.150444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.150488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:41.707 [2024-12-06 15:59:30.150504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:32:41.707 [2024-12-06 15:59:30.150526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.153831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.153874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:41.707 [2024-12-06 15:59:30.153888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:32:41.707 [2024-12-06 15:59:30.153901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.157657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.157838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:41.707 [2024-12-06 15:59:30.157863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:32:41.707 [2024-12-06 15:59:30.157881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.157965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.157989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:41.707 [2024-12-06 15:59:30.158002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:41.707 [2024-12-06 15:59:30.158016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:41.707 [2024-12-06 15:59:30.158092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:41.707 [2024-12-06 15:59:30.158111] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:41.707 [2024-12-06 15:59:30.158122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:32:41.707 [2024-12-06 15:59:30.158138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:41.707 [2024-12-06 15:59:30.159357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3337.448 ms, result 0
00:32:41.707 {
00:32:41.707 "name": "ftl0",
00:32:41.707 "uuid": "8573de67-cc7d-4b52-b777-0a845efb4629"
00:32:41.707 }
00:32:41.707 15:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:32:41.707 15:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:32:41.966 15:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:32:41.966 15:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:32:42.226 [2024-12-06 15:59:30.701375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.226 [2024-12-06 15:59:30.701416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:42.226 [2024-12-06 15:59:30.701439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:42.226 [2024-12-06 15:59:30.701450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.226 [2024-12-06 15:59:30.701482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:42.226 [2024-12-06 15:59:30.702660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.226 [2024-12-06 15:59:30.702698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:42.226 [2024-12-06 15:59:30.702712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms
00:32:42.226 [2024-12-06 15:59:30.702724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.227 [2024-12-06 15:59:30.702975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.227 [2024-12-06 15:59:30.703000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:42.227 [2024-12-06 15:59:30.703013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms
00:32:42.227 [2024-12-06 15:59:30.703029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.227 [2024-12-06 15:59:30.705599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.227 [2024-12-06 15:59:30.705630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:32:42.227 [2024-12-06 15:59:30.705643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms
00:32:42.227 [2024-12-06 15:59:30.705656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.227 [2024-12-06 15:59:30.710664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.227 [2024-12-06 15:59:30.710697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:32:42.227 [2024-12-06 15:59:30.710711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.987 ms
00:32:42.227 [2024-12-06 15:59:30.710726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.227 [2024-12-06 15:59:30.712063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:42.227 [2024-12-06 15:59:30.712236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:42.227 [2024-12-06 15:59:30.712259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:32:42.227 [2024-12-06 15:59:30.712273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.717502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.717547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:42.227 [2024-12-06 15:59:30.717563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.185 ms 00:32:42.227 [2024-12-06 15:59:30.717577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.717691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.717713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:42.227 [2024-12-06 15:59:30.717731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:42.227 [2024-12-06 15:59:30.717743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.719774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.719961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:42.227 [2024-12-06 15:59:30.719985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:32:42.227 [2024-12-06 15:59:30.720000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.721622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.721666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:42.227 [2024-12-06 15:59:30.721680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:32:42.227 [2024-12-06 15:59:30.721692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.722895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.722949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:42.227 [2024-12-06 15:59:30.722965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:32:42.227 [2024-12-06 15:59:30.722976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.724203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.227 [2024-12-06 15:59:30.724242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:42.227 [2024-12-06 15:59:30.724256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:32:42.227 [2024-12-06 15:59:30.724268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.227 [2024-12-06 15:59:30.724303] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:42.227 [2024-12-06 15:59:30.724329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724357] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724655] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 
15:59:30.724984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.724995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:42.227 [2024-12-06 15:59:30.725007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:32:42.228 [2024-12-06 15:59:30.725278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:42.228 [2024-12-06 15:59:30.725556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:42.228 [2024-12-06 15:59:30.725576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8573de67-cc7d-4b52-b777-0a845efb4629 00:32:42.228 
[2024-12-06 15:59:30.725590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:42.228 [2024-12-06 15:59:30.725600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:42.228 [2024-12-06 15:59:30.725612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:42.228 [2024-12-06 15:59:30.725623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:42.228 [2024-12-06 15:59:30.725635] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:42.228 [2024-12-06 15:59:30.725650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:42.228 [2024-12-06 15:59:30.725662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:42.228 [2024-12-06 15:59:30.725670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:42.228 [2024-12-06 15:59:30.725681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:42.228 [2024-12-06 15:59:30.725691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.228 [2024-12-06 15:59:30.725704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:42.228 [2024-12-06 15:59:30.725715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:32:42.228 [2024-12-06 15:59:30.725735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.728256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.228 [2024-12-06 15:59:30.728293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:42.228 [2024-12-06 15:59:30.728307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 00:32:42.228 [2024-12-06 15:59:30.728323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.728427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.228 [2024-12-06 15:59:30.728445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:42.228 [2024-12-06 15:59:30.728457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:42.228 [2024-12-06 15:59:30.728469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.737390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.737551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.228 [2024-12-06 15:59:30.737667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.737717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.737800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.738019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.228 [2024-12-06 15:59:30.738069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.738108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.738404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.738547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.228 [2024-12-06 15:59:30.738671] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.738787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.738854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.739033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.228 [2024-12-06 15:59:30.739104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.739145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.754178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.754418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.228 [2024-12-06 15:59:30.754532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.754589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.766295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.766483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.228 [2024-12-06 15:59:30.766609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.766660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.766886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.766989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.228 [2024-12-06 15:59:30.767107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.228 [2024-12-06 15:59:30.767158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.228 [2024-12-06 15:59:30.767261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.228 [2024-12-06 15:59:30.767427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.228 [2024-12-06 15:59:30.767492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.229 [2024-12-06 15:59:30.767679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.229 [2024-12-06 15:59:30.767834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.229 [2024-12-06 15:59:30.767910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.229 [2024-12-06 15:59:30.768049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.229 [2024-12-06 15:59:30.768104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.229 [2024-12-06 15:59:30.768255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.229 [2024-12-06 15:59:30.768336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:42.229 [2024-12-06 15:59:30.768508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:42.229 [2024-12-06 15:59:30.768534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.229 [2024-12-06 15:59:30.768592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:42.229 [2024-12-06 15:59:30.768613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev
00:32:42.229 [2024-12-06 15:59:30.768625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:42.229 [2024-12-06 15:59:30.768638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.229 [2024-12-06 15:59:30.768717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:42.229 [2024-12-06 15:59:30.768737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:42.229 [2024-12-06 15:59:30.768749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:42.229 [2024-12-06 15:59:30.768762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:42.229 [2024-12-06 15:59:30.768931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.502 ms, result 0
00:32:42.229 true
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95117
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 95117 ']'
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 95117
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95117
00:32:42.229 killing process with pid 95117
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95117'
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 95117
00:32:42.229 15:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 95117
00:32:45.519 15:59:33 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:32:49.708 262144+0 records in
00:32:49.708 262144+0 records out
00:32:49.708 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00512 s, 268 MB/s
00:32:49.708 15:59:37 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:32:51.082 15:59:39 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:51.340 [2024-12-06 15:59:39.775500] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:32:51.340 [2024-12-06 15:59:39.775972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95321 ] 00:32:51.340 [2024-12-06 15:59:39.940870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.340 [2024-12-06 15:59:39.987142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.599 [2024-12-06 15:59:40.134509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:51.599 [2024-12-06 15:59:40.134603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:51.859 [2024-12-06 15:59:40.292565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.292880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:51.859 [2024-12-06 15:59:40.292911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:51.859 [2024-12-06 15:59:40.292948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.293030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.293050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:51.859 [2024-12-06 15:59:40.293063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:51.859 [2024-12-06 15:59:40.293098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.293139] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:51.859 [2024-12-06 15:59:40.293392] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:51.859 [2024-12-06 15:59:40.293414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.293429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:51.859 [2024-12-06 15:59:40.293443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:32:51.859 [2024-12-06 15:59:40.293454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.295300] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:51.859 [2024-12-06 15:59:40.298094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.298132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:51.859 [2024-12-06 15:59:40.298148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:32:51.859 [2024-12-06 15:59:40.298167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.298231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.298256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:51.859 [2024-12-06 15:59:40.298271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:51.859 [2024-12-06 15:59:40.298282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.308285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:51.859 [2024-12-06 15:59:40.308345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:51.859 [2024-12-06 15:59:40.308369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.930 ms 00:32:51.859 [2024-12-06 15:59:40.308390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.308501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.308518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:51.859 [2024-12-06 15:59:40.308537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:51.859 [2024-12-06 15:59:40.308547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.308631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.859 [2024-12-06 15:59:40.308650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:51.859 [2024-12-06 15:59:40.308662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:51.859 [2024-12-06 15:59:40.308676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.859 [2024-12-06 15:59:40.308738] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:51.859 [2024-12-06 15:59:40.310859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.860 [2024-12-06 15:59:40.311153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:51.860 [2024-12-06 15:59:40.311179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:32:51.860 [2024-12-06 15:59:40.311192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.860 [2024-12-06 15:59:40.311245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.860 [2024-12-06 15:59:40.311261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:51.860 [2024-12-06 15:59:40.311274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:51.860 [2024-12-06 15:59:40.311289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.860 [2024-12-06 15:59:40.311320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:51.860 [2024-12-06 15:59:40.311360] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:51.860 [2024-12-06 15:59:40.311407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:51.860 [2024-12-06 15:59:40.311429] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:51.860 [2024-12-06 15:59:40.311541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:51.860 [2024-12-06 15:59:40.311555] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:51.860 [2024-12-06 15:59:40.311575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:51.860 [2024-12-06 15:59:40.311588] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:51.860 [2024-12-06 15:59:40.311600] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:51.860 [2024-12-06 15:59:40.311621] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:51.860 [2024-12-06 15:59:40.311632] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:51.860 [2024-12-06 15:59:40.311642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:51.860 [2024-12-06 15:59:40.311651] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:51.860 [2024-12-06 15:59:40.311663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.860 [2024-12-06 15:59:40.311695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:51.860 [2024-12-06 15:59:40.311705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:32:51.860 [2024-12-06 15:59:40.311725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.860 [2024-12-06 15:59:40.311804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.860 [2024-12-06 15:59:40.311817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:51.860 [2024-12-06 15:59:40.311828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:51.860 [2024-12-06 15:59:40.311838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.860 [2024-12-06 15:59:40.311941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:51.860 [2024-12-06 15:59:40.311977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:51.860 [2024-12-06 15:59:40.311993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:51.860 [2024-12-06 15:59:40.312040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:51.860 [2024-12-06 15:59:40.312070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.860 [2024-12-06 15:59:40.312087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:51.860 [2024-12-06 15:59:40.312100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:51.860 [2024-12-06 15:59:40.312111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:51.860 [2024-12-06 15:59:40.312120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:51.860 [2024-12-06 15:59:40.312131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:51.860 [2024-12-06 15:59:40.312141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:51.860 [2024-12-06 15:59:40.312160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312168] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:51.860 [2024-12-06 15:59:40.312186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:51.860 [2024-12-06 15:59:40.312212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:51.860 [2024-12-06 15:59:40.312237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:51.860 [2024-12-06 15:59:40.312270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:51.860 [2024-12-06 15:59:40.312296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.860 [2024-12-06 15:59:40.312313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:51.860 [2024-12-06 15:59:40.312322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:51.860 [2024-12-06 15:59:40.312331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:51.860 [2024-12-06 15:59:40.312340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:51.860 [2024-12-06 15:59:40.312348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:51.860 [2024-12-06 15:59:40.312358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:51.860 [2024-12-06 15:59:40.312375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:51.860 [2024-12-06 15:59:40.312383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312396] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:51.860 [2024-12-06 15:59:40.312409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:51.860 [2024-12-06 15:59:40.312418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:51.860 [2024-12-06 15:59:40.312427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:51.860 [2024-12-06 15:59:40.312439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:51.860 [2024-12-06 15:59:40.312448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:51.860 [2024-12-06 15:59:40.312457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:51.860 
[2024-12-06 15:59:40.312466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:51.860 [2024-12-06 15:59:40.312475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:51.861 [2024-12-06 15:59:40.312484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:51.861 [2024-12-06 15:59:40.312494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:51.861 [2024-12-06 15:59:40.312506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:51.861 [2024-12-06 15:59:40.312526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:51.861 [2024-12-06 15:59:40.312536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:51.861 [2024-12-06 15:59:40.312545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:51.861 [2024-12-06 15:59:40.312557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:51.861 [2024-12-06 15:59:40.312568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:51.861 [2024-12-06 15:59:40.312577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:51.861 [2024-12-06 15:59:40.312586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:51.861 [2024-12-06 15:59:40.312596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:51.861 [2024-12-06 15:59:40.312616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:51.861 [2024-12-06 15:59:40.312664] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:51.861 [2024-12-06 15:59:40.312674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:51.861 [2024-12-06 15:59:40.312721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:51.861 [2024-12-06 15:59:40.312730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:51.861 [2024-12-06 15:59:40.312739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:51.861 [2024-12-06 15:59:40.312753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.312763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:51.861 [2024-12-06 15:59:40.312773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:32:51.861 [2024-12-06 15:59:40.312783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.328756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.328813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:51.861 [2024-12-06 15:59:40.328831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:32:51.861 [2024-12-06 15:59:40.328853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.328969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.328985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:51.861 [2024-12-06 15:59:40.328996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:51.861 [2024-12-06 15:59:40.329007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.355613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.355685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:51.861 [2024-12-06 15:59:40.355714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.524 ms 00:32:51.861 [2024-12-06 15:59:40.355732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.355825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.355849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:51.861 [2024-12-06 15:59:40.355870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:51.861 [2024-12-06 15:59:40.355887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.356665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.356746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:51.861 [2024-12-06 15:59:40.356770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:32:51.861 [2024-12-06 15:59:40.356788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.356993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.357027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:51.861 [2024-12-06 15:59:40.357052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:32:51.861 [2024-12-06 15:59:40.357063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.365715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.365750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:51.861 [2024-12-06 15:59:40.365766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.626 ms 00:32:51.861 [2024-12-06 15:59:40.365776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.368835] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:51.861 [2024-12-06 15:59:40.369164] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:51.861 [2024-12-06 15:59:40.369187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.369216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:51.861 [2024-12-06 15:59:40.369228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:32:51.861 [2024-12-06 15:59:40.369239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.382943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.382984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:51.861 [2024-12-06 15:59:40.383001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.582 ms 00:32:51.861 [2024-12-06 15:59:40.383011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.384828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.385028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:51.861 [2024-12-06 15:59:40.385053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:32:51.861 [2024-12-06 15:59:40.385066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.386611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.861 [2024-12-06 15:59:40.386646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:51.861 [2024-12-06 15:59:40.386662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:32:51.861 [2024-12-06 15:59:40.386672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.861 [2024-12-06 15:59:40.387004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.387024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:51.862 [2024-12-06 15:59:40.387036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:32:51.862 [2024-12-06 15:59:40.387046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.410865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.410959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:51.862 [2024-12-06 15:59:40.410980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.789 ms 00:32:51.862 [2024-12-06 15:59:40.410992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.417537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:51.862 [2024-12-06 15:59:40.419775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.419806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:51.862 [2024-12-06 15:59:40.419826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.725 ms 00:32:51.862 [2024-12-06 15:59:40.419842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.419924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.419958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:51.862 [2024-12-06 15:59:40.419983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:51.862 [2024-12-06 15:59:40.420004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.420103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.420120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:51.862 [2024-12-06 15:59:40.420131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:51.862 [2024-12-06 15:59:40.420146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.420180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.420194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:51.862 [2024-12-06 15:59:40.420205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:51.862 [2024-12-06 15:59:40.420214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.420260] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:51.862 [2024-12-06 15:59:40.420279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.420289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:51.862 [2024-12-06 15:59:40.420299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:51.862 [2024-12-06 15:59:40.420309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.424185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.424224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:51.862 [2024-12-06 15:59:40.424240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:32:51.862 [2024-12-06 15:59:40.424251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 [2024-12-06 15:59:40.424325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:51.862 [2024-12-06 15:59:40.424353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:51.862 [2024-12-06 15:59:40.424372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:51.862 [2024-12-06 15:59:40.424386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:51.862 
[2024-12-06 15:59:40.425929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.821 ms, result 0 00:32:52.798
[2024-12-06T15:59:42.867Z] Copying: 23/1024 [MB] (23 MBps) [... 41 intermediate Copying progress updates elided, 22-26 MBps throughout ...] [2024-12-06T16:00:23.444Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-06 16:00:23.169250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.751 [2024-12-06 16:00:23.169308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:34.751 [2024-12-06 16:00:23.169338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:34.751 [2024-12-06 16:00:23.169349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.751 [2024-12-06 16:00:23.169382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:34.751 [2024-12-06 16:00:23.170355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.751 [2024-12-06 16:00:23.170378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Unregister IO device 00:33:34.751 [2024-12-06 16:00:23.170390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:33:34.751 [2024-12-06 16:00:23.170400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.751 [2024-12-06 16:00:23.172874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.751 [2024-12-06 16:00:23.172922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:34.751 [2024-12-06 16:00:23.172947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:33:34.751 [2024-12-06 16:00:23.172960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.751 [2024-12-06 16:00:23.173004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.751 [2024-12-06 16:00:23.173019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:34.751 [2024-12-06 16:00:23.173029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:34.751 [2024-12-06 16:00:23.173038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.751 [2024-12-06 16:00:23.173089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.751 [2024-12-06 16:00:23.173101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:34.751 [2024-12-06 16:00:23.173111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:34.751 [2024-12-06 16:00:23.173119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.751 [2024-12-06 16:00:23.173135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:34.751 [2024-12-06 16:00:23.173151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:34.751 [2024-12-06 16:00:23.173308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173743] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.173997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 
16:00:23.174006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:34.752 [2024-12-06 16:00:23.174148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:34.752 [2024-12-06 16:00:23.174157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8573de67-cc7d-4b52-b777-0a845efb4629 00:33:34.752 [2024-12-06 16:00:23.174167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:34.752 [2024-12-06 16:00:23.174175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:34.752 [2024-12-06 16:00:23.174194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:34.752 [2024-12-06 16:00:23.174210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:34.752 [2024-12-06 16:00:23.174226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:34.753 [2024-12-06 16:00:23.174236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:34.753 [2024-12-06 16:00:23.174253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:34.753 [2024-12-06 16:00:23.174261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:34.753 [2024-12-06 16:00:23.174268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:34.753 [2024-12-06 16:00:23.174277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.753 [2024-12-06 16:00:23.174285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:34.753 [2024-12-06 16:00:23.174303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:33:34.753 [2024-12-06 16:00:23.174322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
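The statistics block above comes from ftl_debug.c's shutdown dump: the device UUID, total versus user write counts, and WAF (write amplification factor, roughly total writes divided by user writes; it prints as "inf" here because this instance saw 32 internal metadata writes and zero user writes), followed by the NV cache throttling limits (crit/high/low/start). A small sketch recomputing WAF from such a log, again assuming the placeholder name build.log:

    # Extract the two counters from the flattened log and guard the
    # divide-by-zero case that the dump prints as "inf".
    t=$(tr '\n' ' ' < build.log | grep -oE 'total writes: +[0-9]+' | head -1 | grep -oE '[0-9]+')
    u=$(tr '\n' ' ' < build.log | grep -oE 'user writes: +[0-9]+' | head -1 | grep -oE '[0-9]+')
    awk -v t="$t" -v u="$u" 'BEGIN { if (u + 0 > 0) print t / u; else print "inf" }'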
00:33:34.753 [2024-12-06 16:00:23.176313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.753 [2024-12-06 16:00:23.176339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:34.753 [2024-12-06 16:00:23.176351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.973 ms 00:33:34.753 [2024-12-06 16:00:23.176368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.176489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.753 [2024-12-06 16:00:23.176507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:34.753 [2024-12-06 16:00:23.176517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:34.753 [2024-12-06 16:00:23.176526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.183987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.184025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:34.753 [2024-12-06 16:00:23.184051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.184062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.184115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.184133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:34.753 [2024-12-06 16:00:23.184143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.184153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.184210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.184228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:34.753 [2024-12-06 16:00:23.184238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.184247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.184266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.184278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:34.753 [2024-12-06 16:00:23.184288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.184303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.197054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.197109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:34.753 [2024-12-06 16:00:23.197125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.197135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:34.753 [2024-12-06 16:00:23.207137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 
16:00:23.207159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:34.753 [2024-12-06 16:00:23.207326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:34.753 [2024-12-06 16:00:23.207416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:34.753 [2024-12-06 16:00:23.207523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:34.753 [2024-12-06 16:00:23.207593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:34.753 [2024-12-06 16:00:23.207697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:34.753 [2024-12-06 16:00:23.207782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:34.753 [2024-12-06 16:00:23.207793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:34.753 [2024-12-06 16:00:23.207812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.753 [2024-12-06 16:00:23.207987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.671 ms, result 0 00:33:35.012 00:33:35.012 00:33:35.012 16:00:23 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:35.272 [2024-12-06 16:00:23.718297] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
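This spdk_dd invocation reads the data back out of the restored FTL instance: --ib=ftl0 selects the FTL bdev as the input block device, --json points spdk_dd at the saved bdev configuration, and --count=262144 at ftl0's 4 KiB block size (262144 x 4 KiB = 1024 MiB) accounts for the 1024 MB that the progress counter below tracks. The restore test then checks this read-back against what was written before the fast shutdown. A sketch of the same flow, where /tmp/testfile and testfile.orig are hypothetical paths rather than paths from this log:

    # Read 262144 x 4 KiB blocks (1024 MiB) from ftl0 into a regular file...
    ./build/bin/spdk_dd --ib=ftl0 --of=/tmp/testfile --json=./ftl.json --count=262144
    # ...then verify it matches a reference copy of the data written
    # before the fast shutdown (verification approach assumed here).
    md5sum /tmp/testfile testfile.orig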
00:33:35.272 [2024-12-06 16:00:23.718473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95752 ] 00:33:35.272 [2024-12-06 16:00:23.871584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.272 [2024-12-06 16:00:23.906406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.531 [2024-12-06 16:00:24.040304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:35.531 [2024-12-06 16:00:24.040392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:35.531 [2024-12-06 16:00:24.198492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.531 [2024-12-06 16:00:24.198544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:35.531 [2024-12-06 16:00:24.198563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:35.531 [2024-12-06 16:00:24.198583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.531 [2024-12-06 16:00:24.198640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.531 [2024-12-06 16:00:24.198658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:35.531 [2024-12-06 16:00:24.198669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:35.531 [2024-12-06 16:00:24.198689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.531 [2024-12-06 16:00:24.198729] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:35.531 [2024-12-06 16:00:24.198971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:35.531 [2024-12-06 16:00:24.199002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.531 [2024-12-06 16:00:24.199017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:35.531 [2024-12-06 16:00:24.199039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:33:35.531 [2024-12-06 16:00:24.199049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.531 [2024-12-06 16:00:24.199409] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:35.531 [2024-12-06 16:00:24.199442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.531 [2024-12-06 16:00:24.199455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:35.532 [2024-12-06 16:00:24.199465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:35.532 [2024-12-06 16:00:24.199481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.199544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.199560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:35.532 [2024-12-06 16:00:24.199571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:35.532 [2024-12-06 16:00:24.199589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.199902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:35.532 [2024-12-06 16:00:24.199919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:35.532 [2024-12-06 16:00:24.199931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:33:35.532 [2024-12-06 16:00:24.199954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.200065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.200090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:35.532 [2024-12-06 16:00:24.200103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:33:35.532 [2024-12-06 16:00:24.200112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.200150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.200175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:35.532 [2024-12-06 16:00:24.200186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:35.532 [2024-12-06 16:00:24.200208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.200235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:35.532 [2024-12-06 16:00:24.202317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.202359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:35.532 [2024-12-06 16:00:24.202381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:33:35.532 [2024-12-06 16:00:24.202391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.202427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.202450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:35.532 [2024-12-06 16:00:24.202461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:35.532 [2024-12-06 16:00:24.202478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.202520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:35.532 [2024-12-06 16:00:24.202561] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:35.532 [2024-12-06 16:00:24.202601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:35.532 [2024-12-06 16:00:24.202627] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:35.532 [2024-12-06 16:00:24.202717] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:35.532 [2024-12-06 16:00:24.202731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:35.532 [2024-12-06 16:00:24.202744] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:35.532 [2024-12-06 16:00:24.202757] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:35.532 [2024-12-06 16:00:24.202774] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:35.532 [2024-12-06 16:00:24.202788] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:35.532 [2024-12-06 16:00:24.202799] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:35.532 [2024-12-06 16:00:24.202808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:35.532 [2024-12-06 16:00:24.202818] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:35.532 [2024-12-06 16:00:24.202839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.202849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:35.532 [2024-12-06 16:00:24.202859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:33:35.532 [2024-12-06 16:00:24.202868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.202962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.532 [2024-12-06 16:00:24.202980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:35.532 [2024-12-06 16:00:24.202997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:35.532 [2024-12-06 16:00:24.203006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.532 [2024-12-06 16:00:24.203125] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:35.532 [2024-12-06 16:00:24.203145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:35.532 [2024-12-06 16:00:24.203166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:35.532 [2024-12-06 16:00:24.203202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:35.532 [2024-12-06 16:00:24.203229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:35.532 [2024-12-06 16:00:24.203246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:35.532 [2024-12-06 16:00:24.203255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:35.532 [2024-12-06 16:00:24.203263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:35.532 [2024-12-06 16:00:24.203273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:35.532 [2024-12-06 16:00:24.203283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:35.532 [2024-12-06 16:00:24.203291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:35.532 [2024-12-06 16:00:24.203307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203315] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:35.532 [2024-12-06 16:00:24.203337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:35.532 [2024-12-06 16:00:24.203361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:35.532 [2024-12-06 16:00:24.203387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:35.532 [2024-12-06 16:00:24.203411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:35.532 [2024-12-06 16:00:24.203440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:35.532 [2024-12-06 16:00:24.203456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:35.532 [2024-12-06 16:00:24.203472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:35.532 [2024-12-06 16:00:24.203482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:35.532 [2024-12-06 16:00:24.203491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:35.532 [2024-12-06 16:00:24.203499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:35.532 [2024-12-06 16:00:24.203508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:35.532 [2024-12-06 16:00:24.203524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:35.532 [2024-12-06 16:00:24.203532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203540] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:35.532 [2024-12-06 16:00:24.203551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:35.532 [2024-12-06 16:00:24.203561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:35.532 [2024-12-06 16:00:24.203584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:35.532 [2024-12-06 16:00:24.203593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:35.532 [2024-12-06 16:00:24.203602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:35.532 
[2024-12-06 16:00:24.203611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:35.532 [2024-12-06 16:00:24.203623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:35.532 [2024-12-06 16:00:24.203633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:35.532 [2024-12-06 16:00:24.203643] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:35.532 [2024-12-06 16:00:24.203656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:35.533 [2024-12-06 16:00:24.203676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:35.533 [2024-12-06 16:00:24.203685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:35.533 [2024-12-06 16:00:24.203695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:35.533 [2024-12-06 16:00:24.203704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:35.533 [2024-12-06 16:00:24.203713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:35.533 [2024-12-06 16:00:24.203722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:35.533 [2024-12-06 16:00:24.203732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:35.533 [2024-12-06 16:00:24.203741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:35.533 [2024-12-06 16:00:24.203751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:35.533 [2024-12-06 16:00:24.203814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:35.533 [2024-12-06 16:00:24.203825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:35.533 [2024-12-06 16:00:24.203852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:35.533 [2024-12-06 16:00:24.203862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:35.533 [2024-12-06 16:00:24.203871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:35.533 [2024-12-06 16:00:24.203882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.533 [2024-12-06 16:00:24.203892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:35.533 [2024-12-06 16:00:24.203902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:33:35.533 [2024-12-06 16:00:24.203919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.533 [2024-12-06 16:00:24.215230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.533 [2024-12-06 16:00:24.215511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:35.533 [2024-12-06 16:00:24.215549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.234 ms 00:33:35.533 [2024-12-06 16:00:24.215562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.533 [2024-12-06 16:00:24.215673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.533 [2024-12-06 16:00:24.215689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:35.533 [2024-12-06 16:00:24.215700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:35.533 [2024-12-06 16:00:24.215710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.235473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.235517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:35.792 [2024-12-06 16:00:24.235534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.683 ms 00:33:35.792 [2024-12-06 16:00:24.235555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.235605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.235622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:35.792 [2024-12-06 16:00:24.235635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:35.792 [2024-12-06 16:00:24.235646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.235785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.235814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:35.792 [2024-12-06 16:00:24.235834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:35.792 [2024-12-06 16:00:24.235853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.236014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.236049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:35.792 [2024-12-06 16:00:24.236074] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:33:35.792 [2024-12-06 16:00:24.236091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.244759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.244798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:35.792 [2024-12-06 16:00:24.244820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.640 ms 00:33:35.792 [2024-12-06 16:00:24.244830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.244992] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:35.792 [2024-12-06 16:00:24.245016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:35.792 [2024-12-06 16:00:24.245040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.245051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:35.792 [2024-12-06 16:00:24.245063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:33:35.792 [2024-12-06 16:00:24.245078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.255847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.256124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:35.792 [2024-12-06 16:00:24.256150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.740 ms 00:33:35.792 [2024-12-06 16:00:24.256179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.792 [2024-12-06 16:00:24.256307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.792 [2024-12-06 16:00:24.256325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:35.792 [2024-12-06 16:00:24.256337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:33:35.793 [2024-12-06 16:00:24.256355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.256419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.256442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:35.793 [2024-12-06 16:00:24.256455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:35.793 [2024-12-06 16:00:24.256466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.256791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.256810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:35.793 [2024-12-06 16:00:24.256821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:33:35.793 [2024-12-06 16:00:24.256835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.256863] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:35.793 [2024-12-06 16:00:24.256880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.256891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:35.793 [2024-12-06 16:00:24.256916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:35.793 [2024-12-06 16:00:24.256927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.264892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:35.793 [2024-12-06 16:00:24.265071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.265090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:35.793 [2024-12-06 16:00:24.265101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.099 ms 00:33:35.793 [2024-12-06 16:00:24.265111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.267152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.267313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:35.793 [2024-12-06 16:00:24.267337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:33:35.793 [2024-12-06 16:00:24.267349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.267442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.267469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:35.793 [2024-12-06 16:00:24.267481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:35.793 [2024-12-06 16:00:24.267491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.267551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.267579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:35.793 [2024-12-06 16:00:24.267592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:35.793 [2024-12-06 16:00:24.267602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.267665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:35.793 [2024-12-06 16:00:24.267686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.267698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:35.793 [2024-12-06 16:00:24.267708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:35.793 [2024-12-06 16:00:24.267718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.272397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.272441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:35.793 [2024-12-06 16:00:24.272457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:33:35.793 [2024-12-06 16:00:24.272468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.272538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.793 [2024-12-06 16:00:24.272556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:35.793 [2024-12-06 16:00:24.272568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:33:35.793 [2024-12-06 16:00:24.272582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.793 [2024-12-06 16:00:24.273772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 74.704 ms, result 0 00:33:37.166
[2024-12-06T16:00:26.806Z] Copying: 26/1024 [MB] (26 MBps) [... 38 intermediate Copying progress updates elided, 24-27 MBps throughout ...] [2024-12-06T16:01:04.748Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-12-06 16:01:04.661602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.055 [2024-12-06 16:01:04.661721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:16.055 [2024-12-06 16:01:04.661753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:16.055 [2024-12-06 16:01:04.661771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.055 [2024-12-06 16:01:04.661819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:16.055 [2024-12-06 16:01:04.662955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.055 [2024-12-06 16:01:04.662992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:34:16.055 [2024-12-06 16:01:04.663012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:34:16.055 [2024-12-06 16:01:04.663028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.055 [2024-12-06 16:01:04.664074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.055 [2024-12-06 16:01:04.664118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:16.055 [2024-12-06 16:01:04.664139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:34:16.055 [2024-12-06 16:01:04.664154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.055 [2024-12-06 16:01:04.664222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.055 [2024-12-06 16:01:04.664245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:16.055 [2024-12-06 16:01:04.664263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:16.055 [2024-12-06 16:01:04.664278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.055 [2024-12-06 16:01:04.664369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.055 [2024-12-06 16:01:04.664393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:16.055 [2024-12-06 16:01:04.664411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:16.055 [2024-12-06 16:01:04.664426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.055 [2024-12-06 16:01:04.664455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:16.055 [2024-12-06 16:01:04.664481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:16.055 [2024-12-06 16:01:04.664508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:16.055 [2024-12-06 16:01:04.664527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:16.055 [2024-12-06 16:01:04.664544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:16.055 [2024-12-06 16:01:04.664561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:16.055 [2024-12-06 16:01:04.664578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:34:16.056 [2024-12-06 16:01:04.664739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.664932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.665997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666620] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:16.056 [2024-12-06 16:01:04.666733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:16.057 [2024-12-06 16:01:04.666875] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:16.057 [2024-12-06 16:01:04.666893] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8573de67-cc7d-4b52-b777-0a845efb4629 00:34:16.057 [2024-12-06 16:01:04.666913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:16.057 [2024-12-06 16:01:04.666929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:16.057 [2024-12-06 16:01:04.666963] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:16.057 [2024-12-06 16:01:04.666982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:16.057 [2024-12-06 16:01:04.667008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:16.057 [2024-12-06 16:01:04.667026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:16.057 [2024-12-06 16:01:04.667042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:16.057 [2024-12-06 16:01:04.667059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:16.057 [2024-12-06 16:01:04.667074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:16.057 [2024-12-06 16:01:04.667092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.057 [2024-12-06 16:01:04.667110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:16.057 [2024-12-06 16:01:04.667129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:34:16.057 [2024-12-06 16:01:04.667153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.670501] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.057 [2024-12-06 16:01:04.670550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:16.057 [2024-12-06 16:01:04.670594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:34:16.057 [2024-12-06 16:01:04.670612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.670791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.057 [2024-12-06 16:01:04.670817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:16.057 [2024-12-06 16:01:04.670846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:34:16.057 [2024-12-06 16:01:04.670863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.681341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.681564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:16.057 [2024-12-06 16:01:04.681706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.681762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.681987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.682151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:16.057 [2024-12-06 16:01:04.682297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.682353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.682548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.682706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:16.057 [2024-12-06 16:01:04.682833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.682981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.683139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.683206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:16.057 [2024-12-06 16:01:04.683342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.683417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.699447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.699725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:16.057 [2024-12-06 16:01:04.699860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.699912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.713069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.713316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:16.057 [2024-12-06 16:01:04.713439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.713504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:34:16.057 [2024-12-06 16:01:04.713716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.713853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:16.057 [2024-12-06 16:01:04.713994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.714256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:16.057 [2024-12-06 16:01:04.714272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.714396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:16.057 [2024-12-06 16:01:04.714409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.714494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:16.057 [2024-12-06 16:01:04.714506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.714597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:16.057 [2024-12-06 16:01:04.714610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.057 [2024-12-06 16:01:04.714701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:16.057 [2024-12-06 16:01:04.714713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.057 [2024-12-06 16:01:04.714725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.057 [2024-12-06 16:01:04.714907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.264 ms, result 0 00:34:16.315 00:34:16.315 00:34:16.315 16:01:04 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:18.215 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:18.215 16:01:06 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:18.215 [2024-12-06 16:01:06.832542] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
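
The "testfile: OK" line above is the actual pass/fail check of this restore test: restore.sh records an md5 checksum of the test file before pushing it through the FTL bdev, drives a write / 'FTL fast shutdown' / restart cycle, and re-verifies the checksum after the data is read back out of ftl0. The spdk_dd invocation that follows kicks off the next pass, writing the same file at a --seek offset of 131072 output blocks (512 MiB, assuming ftl0 exposes FTL's native 4 KiB block size). A minimal sketch of one such round-trip, using only flags that appear in this log (paths shortened; the read-back step is elided because its parameters are not shown in this excerpt):

  # record the reference checksum before the data ever touches FTL
  md5sum testfile > testfile.md5

  # write the file into the ftl0 bdev at a 131072-block offset
  spdk_dd --if=testfile --ob=ftl0 --json=ftl.json --seek=131072

  # ... 'FTL fast shutdown' + restart happen here, then the same region
  # is read back out of ftl0 over the original file ...

  # passes only if the restored bytes are identical to what was written
  md5sum -c testfile.md5

Because the fast shutdown left the shared-memory state clean (note "SHM: clean 1, shm_clean 1" and "SHM: skipping p2l ckpt restore" in the startup that follows), the subsequent 'FTL startup' completes in roughly 78 ms instead of replaying P2L checkpoints.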
00:34:18.215 [2024-12-06 16:01:06.832903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96186 ] 00:34:18.473 [2024-12-06 16:01:06.989141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.473 [2024-12-06 16:01:07.033668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.734 [2024-12-06 16:01:07.183398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:18.734 [2024-12-06 16:01:07.183486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:18.734 [2024-12-06 16:01:07.341469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.734 [2024-12-06 16:01:07.341515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:18.734 [2024-12-06 16:01:07.341532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:18.735 [2024-12-06 16:01:07.341543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.341613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.341631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:18.735 [2024-12-06 16:01:07.341642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:18.735 [2024-12-06 16:01:07.341661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.341699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:18.735 [2024-12-06 16:01:07.341988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:18.735 [2024-12-06 16:01:07.342020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.342036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:18.735 [2024-12-06 16:01:07.342060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:34:18.735 [2024-12-06 16:01:07.342070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.342563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:18.735 [2024-12-06 16:01:07.342594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.342607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:18.735 [2024-12-06 16:01:07.342618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:18.735 [2024-12-06 16:01:07.342635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.342695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.342711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:18.735 [2024-12-06 16:01:07.342721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:18.735 [2024-12-06 16:01:07.342731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.343085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:18.735 [2024-12-06 16:01:07.343111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:18.735 [2024-12-06 16:01:07.343124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:34:18.735 [2024-12-06 16:01:07.343133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.343246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.343264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:18.735 [2024-12-06 16:01:07.343276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:34:18.735 [2024-12-06 16:01:07.343285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.343312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.343326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:18.735 [2024-12-06 16:01:07.343337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:18.735 [2024-12-06 16:01:07.343346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.343378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:18.735 [2024-12-06 16:01:07.345841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.345894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:18.735 [2024-12-06 16:01:07.345909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:34:18.735 [2024-12-06 16:01:07.345918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.345969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.345985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:18.735 [2024-12-06 16:01:07.345996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:18.735 [2024-12-06 16:01:07.346005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.346057] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:18.735 [2024-12-06 16:01:07.346091] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:18.735 [2024-12-06 16:01:07.346142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:18.735 [2024-12-06 16:01:07.346168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:18.735 [2024-12-06 16:01:07.346263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:18.735 [2024-12-06 16:01:07.346277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:18.735 [2024-12-06 16:01:07.346290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:18.735 [2024-12-06 16:01:07.346304] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346324] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346336] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:18.735 [2024-12-06 16:01:07.346346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:18.735 [2024-12-06 16:01:07.346355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:18.735 [2024-12-06 16:01:07.346364] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:18.735 [2024-12-06 16:01:07.346374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.346384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:18.735 [2024-12-06 16:01:07.346394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:34:18.735 [2024-12-06 16:01:07.346403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.346484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.735 [2024-12-06 16:01:07.346498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:18.735 [2024-12-06 16:01:07.346513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:18.735 [2024-12-06 16:01:07.346523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.735 [2024-12-06 16:01:07.346641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:18.735 [2024-12-06 16:01:07.346660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:18.735 [2024-12-06 16:01:07.346677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:18.735 [2024-12-06 16:01:07.346707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:18.735 [2024-12-06 16:01:07.346737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:18.735 [2024-12-06 16:01:07.346754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:18.735 [2024-12-06 16:01:07.346763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:18.735 [2024-12-06 16:01:07.346772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:18.735 [2024-12-06 16:01:07.346782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:18.735 [2024-12-06 16:01:07.346792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:18.735 [2024-12-06 16:01:07.346801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:18.735 [2024-12-06 16:01:07.346819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346831] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:18.735 [2024-12-06 16:01:07.346851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:18.735 [2024-12-06 16:01:07.346877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:18.735 [2024-12-06 16:01:07.346903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:18.735 [2024-12-06 16:01:07.346928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:18.735 [2024-12-06 16:01:07.346964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:18.735 [2024-12-06 16:01:07.346973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:18.735 [2024-12-06 16:01:07.346982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:18.735 [2024-12-06 16:01:07.346997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:18.735 [2024-12-06 16:01:07.347007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:18.735 [2024-12-06 16:01:07.347017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:18.735 [2024-12-06 16:01:07.347026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:18.735 [2024-12-06 16:01:07.347035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:18.735 [2024-12-06 16:01:07.347043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.735 [2024-12-06 16:01:07.347053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:18.736 [2024-12-06 16:01:07.347061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:18.736 [2024-12-06 16:01:07.347070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.736 [2024-12-06 16:01:07.347079] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:18.736 [2024-12-06 16:01:07.347088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:18.736 [2024-12-06 16:01:07.347115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:18.736 [2024-12-06 16:01:07.347129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:18.736 [2024-12-06 16:01:07.347139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:18.736 [2024-12-06 16:01:07.347149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:18.736 [2024-12-06 16:01:07.347158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:18.736 
[2024-12-06 16:01:07.347170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:18.736 [2024-12-06 16:01:07.347180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:18.736 [2024-12-06 16:01:07.347190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:18.736 [2024-12-06 16:01:07.347201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:18.736 [2024-12-06 16:01:07.347214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:18.736 [2024-12-06 16:01:07.347235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:18.736 [2024-12-06 16:01:07.347245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:18.736 [2024-12-06 16:01:07.347256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:18.736 [2024-12-06 16:01:07.347266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:18.736 [2024-12-06 16:01:07.347275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:18.736 [2024-12-06 16:01:07.347285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:18.736 [2024-12-06 16:01:07.347296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:18.736 [2024-12-06 16:01:07.347306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:18.736 [2024-12-06 16:01:07.347316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:18.736 [2024-12-06 16:01:07.347385] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:18.736 [2024-12-06 16:01:07.347395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:18.736 [2024-12-06 16:01:07.347415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:18.736 [2024-12-06 16:01:07.347425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:18.736 [2024-12-06 16:01:07.347434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:18.736 [2024-12-06 16:01:07.347445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.347455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:18.736 [2024-12-06 16:01:07.347465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:34:18.736 [2024-12-06 16:01:07.347476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.358818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.358868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:18.736 [2024-12-06 16:01:07.358900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.291 ms 00:34:18.736 [2024-12-06 16:01:07.358910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.359032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.359049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:18.736 [2024-12-06 16:01:07.359061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:34:18.736 [2024-12-06 16:01:07.359070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.381625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.381668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:18.736 [2024-12-06 16:01:07.381683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.481 ms 00:34:18.736 [2024-12-06 16:01:07.381694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.381756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.381772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:18.736 [2024-12-06 16:01:07.381784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:18.736 [2024-12-06 16:01:07.381793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.381933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.381970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:18.736 [2024-12-06 16:01:07.381984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:18.736 [2024-12-06 16:01:07.381996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.382141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.382172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:18.736 [2024-12-06 16:01:07.382188] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:34:18.736 [2024-12-06 16:01:07.382199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.390914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.390961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:18.736 [2024-12-06 16:01:07.390998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.681 ms 00:34:18.736 [2024-12-06 16:01:07.391009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.391147] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:18.736 [2024-12-06 16:01:07.391192] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:18.736 [2024-12-06 16:01:07.391206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.391218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:18.736 [2024-12-06 16:01:07.391229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:34:18.736 [2024-12-06 16:01:07.391245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.402309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.402339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:18.736 [2024-12-06 16:01:07.402351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.043 ms 00:34:18.736 [2024-12-06 16:01:07.402375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.402490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.402506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:18.736 [2024-12-06 16:01:07.402527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:34:18.736 [2024-12-06 16:01:07.402542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.402632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.402656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:18.736 [2024-12-06 16:01:07.402667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:18.736 [2024-12-06 16:01:07.402677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.403038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.403063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:18.736 [2024-12-06 16:01:07.403076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:34:18.736 [2024-12-06 16:01:07.403090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.403118] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:18.736 [2024-12-06 16:01:07.403135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.403146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:18.736 [2024-12-06 16:01:07.403162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:34:18.736 [2024-12-06 16:01:07.403172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.411483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:18.736 [2024-12-06 16:01:07.411659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.411677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:18.736 [2024-12-06 16:01:07.411688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.463 ms 00:34:18.736 [2024-12-06 16:01:07.411698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.736 [2024-12-06 16:01:07.413845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.736 [2024-12-06 16:01:07.413881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:18.736 [2024-12-06 16:01:07.413894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:34:18.736 [2024-12-06 16:01:07.413903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.414006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.737 [2024-12-06 16:01:07.414027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:18.737 [2024-12-06 16:01:07.414037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:18.737 [2024-12-06 16:01:07.414047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.414100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.737 [2024-12-06 16:01:07.414130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:18.737 [2024-12-06 16:01:07.414141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:18.737 [2024-12-06 16:01:07.414150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.414206] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:18.737 [2024-12-06 16:01:07.414227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.737 [2024-12-06 16:01:07.414246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:18.737 [2024-12-06 16:01:07.414257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:18.737 [2024-12-06 16:01:07.414266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.418801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.737 [2024-12-06 16:01:07.418845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:18.737 [2024-12-06 16:01:07.418876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:34:18.737 [2024-12-06 16:01:07.418886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.418969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.737 [2024-12-06 16:01:07.418987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:18.737 [2024-12-06 16:01:07.418999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:34:18.737 [2024-12-06 16:01:07.419014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.737 [2024-12-06 16:01:07.420111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 78.086 ms, result 0 00:34:20.110  [2024-12-06T16:01:09.737Z] Copying: 22/1024 [MB] (22 MBps) [... intermediate Copying progress updates at 22-23 MBps elided ...] [2024-12-06T16:01:53.855Z] Copying: 1048248/1048576 [kB] (8872 kBps) [2024-12-06T16:01:53.855Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-06 16:01:53.755686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.162 [2024-12-06 16:01:53.755751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:05.162 [2024-12-06 16:01:53.755770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:05.162 [2024-12-06 16:01:53.755782] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.162 [2024-12-06 16:01:53.757270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:05.162 [2024-12-06 16:01:53.761925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.162 [2024-12-06 16:01:53.761973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:05.162 [2024-12-06 16:01:53.761988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.610 ms 00:35:05.162 [2024-12-06 16:01:53.761998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.162 [2024-12-06 16:01:53.770487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.162 [2024-12-06 16:01:53.770538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:05.162 [2024-12-06 16:01:53.770553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.308 ms 00:35:05.162 [2024-12-06 16:01:53.770563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.162 [2024-12-06 16:01:53.770607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.162 [2024-12-06 16:01:53.770622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:05.162 [2024-12-06 16:01:53.770634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:05.162 [2024-12-06 16:01:53.770645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.162 [2024-12-06 16:01:53.770703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.162 [2024-12-06 16:01:53.770721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:05.162 [2024-12-06 16:01:53.770732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:35:05.162 [2024-12-06 16:01:53.770742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.162 [2024-12-06 16:01:53.770760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:05.162 [2024-12-06 16:01:53.770775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:35:05.162 [2024-12-06 16:01:53.770799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:35:05.162 [2024-12-06 16:01:53.770892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.770990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:05.162 [2024-12-06 16:01:53.771262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771638] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:05.163 [2024-12-06 16:01:53.771814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:05.163 [2024-12-06 16:01:53.771830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8573de67-cc7d-4b52-b777-0a845efb4629 00:35:05.163 [2024-12-06 16:01:53.771842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:35:05.163 [2024-12-06 16:01:53.771851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129312 00:35:05.163 [2024-12-06 16:01:53.771860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:35:05.163 [2024-12-06 16:01:53.771870] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:35:05.163 [2024-12-06 16:01:53.771883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:05.163 [2024-12-06 16:01:53.771893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:05.163 [2024-12-06 16:01:53.771911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:05.163 [2024-12-06 16:01:53.771921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:05.163 [2024-12-06 16:01:53.771929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:05.163 [2024-12-06 16:01:53.771967] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.163 [2024-12-06 16:01:53.771979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:05.163 [2024-12-06 16:01:53.771990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:35:05.163 [2024-12-06 16:01:53.772009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.774151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.163 [2024-12-06 16:01:53.774179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:05.163 [2024-12-06 16:01:53.774198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:35:05.163 [2024-12-06 16:01:53.774208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.774361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.163 [2024-12-06 16:01:53.774376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:05.163 [2024-12-06 16:01:53.774386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:35:05.163 [2024-12-06 16:01:53.774395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.782021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.163 [2024-12-06 16:01:53.782065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:05.163 [2024-12-06 16:01:53.782079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.163 [2024-12-06 16:01:53.782091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.782154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.163 [2024-12-06 16:01:53.782168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:05.163 [2024-12-06 16:01:53.782179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.163 [2024-12-06 16:01:53.782189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.782246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.163 [2024-12-06 16:01:53.782265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:05.163 [2024-12-06 16:01:53.782282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.163 [2024-12-06 16:01:53.782301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.782339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.163 [2024-12-06 16:01:53.782351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:05.163 [2024-12-06 16:01:53.782362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.163 [2024-12-06 16:01:53.782371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.163 [2024-12-06 16:01:53.795690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.163 [2024-12-06 16:01:53.795745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:05.163 [2024-12-06 16:01:53.795760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.163 [2024-12-06 16:01:53.795770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:35:05.163 [2024-12-06 16:01:53.806697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.806747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:05.164 [2024-12-06 16:01:53.806762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.806773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.806836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.806861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:05.164 [2024-12-06 16:01:53.806872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.806891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.806948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.806965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:05.164 [2024-12-06 16:01:53.806993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.807004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.807074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.807091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:05.164 [2024-12-06 16:01:53.807103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.807112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.807159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.807181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:05.164 [2024-12-06 16:01:53.807192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.807201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.807248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.807261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:05.164 [2024-12-06 16:01:53.807272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.807282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.807343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:05.164 [2024-12-06 16:01:53.807358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:05.164 [2024-12-06 16:01:53.807369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:05.164 [2024-12-06 16:01:53.807380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.164 [2024-12-06 16:01:53.807534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.488 ms, result 0 00:35:06.102 00:35:06.102 00:35:06.102 16:01:54 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:06.102 [2024-12-06 16:01:54.660562] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:06.102 [2024-12-06 16:01:54.660795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96650 ] 00:35:06.360 [2024-12-06 16:01:54.818612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.360 [2024-12-06 16:01:54.859667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.360 [2024-12-06 16:01:54.994489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:06.360 [2024-12-06 16:01:54.994579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:06.623 [2024-12-06 16:01:55.153166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.153226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:06.623 [2024-12-06 16:01:55.153245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:06.623 [2024-12-06 16:01:55.153256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.153327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.153347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:06.623 [2024-12-06 16:01:55.153358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:35:06.623 [2024-12-06 16:01:55.153380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.153417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:06.623 [2024-12-06 16:01:55.153721] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:06.623 [2024-12-06 16:01:55.153753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.153765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:06.623 [2024-12-06 16:01:55.153791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:35:06.623 [2024-12-06 16:01:55.153802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.154302] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:06.623 [2024-12-06 16:01:55.154335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.154348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:06.623 [2024-12-06 16:01:55.154371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:06.623 [2024-12-06 16:01:55.154386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.154447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.154462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:06.623 [2024-12-06 16:01:55.154473] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:06.623 [2024-12-06 16:01:55.154483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.154791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.154816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:06.623 [2024-12-06 16:01:55.154832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:35:06.623 [2024-12-06 16:01:55.154843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.154963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.154989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:06.623 [2024-12-06 16:01:55.155002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:06.623 [2024-12-06 16:01:55.155012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.155040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.155065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:06.623 [2024-12-06 16:01:55.155077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:06.623 [2024-12-06 16:01:55.155087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.155114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:06.623 [2024-12-06 16:01:55.157164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.157198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:06.623 [2024-12-06 16:01:55.157211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:35:06.623 [2024-12-06 16:01:55.157222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.157264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.157279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:06.623 [2024-12-06 16:01:55.157291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:06.623 [2024-12-06 16:01:55.157301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.157355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:06.623 [2024-12-06 16:01:55.157390] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:06.623 [2024-12-06 16:01:55.157432] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:06.623 [2024-12-06 16:01:55.157453] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:06.623 [2024-12-06 16:01:55.157543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:06.623 [2024-12-06 16:01:55.157557] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:06.623 [2024-12-06 16:01:55.157579] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:06.623 [2024-12-06 16:01:55.157593] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:06.623 [2024-12-06 16:01:55.157609] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:06.623 [2024-12-06 16:01:55.157620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:06.623 [2024-12-06 16:01:55.157639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:06.623 [2024-12-06 16:01:55.157649] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:06.623 [2024-12-06 16:01:55.157662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:06.623 [2024-12-06 16:01:55.157676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.157686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:06.623 [2024-12-06 16:01:55.157696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:35:06.623 [2024-12-06 16:01:55.157705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.157792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.623 [2024-12-06 16:01:55.157818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:06.623 [2024-12-06 16:01:55.157842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:06.623 [2024-12-06 16:01:55.157852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.623 [2024-12-06 16:01:55.157969] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:06.623 [2024-12-06 16:01:55.157990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:06.623 [2024-12-06 16:01:55.158002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:06.623 [2024-12-06 16:01:55.158031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:06.623 [2024-12-06 16:01:55.158065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:06.623 [2024-12-06 16:01:55.158083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:06.623 [2024-12-06 16:01:55.158091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:06.623 [2024-12-06 16:01:55.158100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:06.623 [2024-12-06 16:01:55.158111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:06.623 [2024-12-06 16:01:55.158121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:06.623 [2024-12-06 16:01:55.158130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 
[2024-12-06 16:01:55.158139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:06.623 [2024-12-06 16:01:55.158147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:06.623 [2024-12-06 16:01:55.158173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:06.623 [2024-12-06 16:01:55.158204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:06.623 [2024-12-06 16:01:55.158230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:06.623 [2024-12-06 16:01:55.158255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:06.623 [2024-12-06 16:01:55.158281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:06.623 [2024-12-06 16:01:55.158297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:06.623 [2024-12-06 16:01:55.158306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:06.623 [2024-12-06 16:01:55.158315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:06.623 [2024-12-06 16:01:55.158324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:06.623 [2024-12-06 16:01:55.158337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:06.623 [2024-12-06 16:01:55.158346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:06.623 [2024-12-06 16:01:55.158364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:06.623 [2024-12-06 16:01:55.158373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158381] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:06.623 [2024-12-06 16:01:55.158391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:06.623 [2024-12-06 16:01:55.158401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.623 [2024-12-06 16:01:55.158424] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:35:06.623 [2024-12-06 16:01:55.158434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:06.623 [2024-12-06 16:01:55.158442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:06.623 [2024-12-06 16:01:55.158451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:06.623 [2024-12-06 16:01:55.158459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:06.623 [2024-12-06 16:01:55.158468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:06.623 [2024-12-06 16:01:55.158478] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:06.623 [2024-12-06 16:01:55.158493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:06.623 [2024-12-06 16:01:55.158505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:06.623 [2024-12-06 16:01:55.158514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:06.623 [2024-12-06 16:01:55.158524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:06.623 [2024-12-06 16:01:55.158534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:06.623 [2024-12-06 16:01:55.158543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:06.623 [2024-12-06 16:01:55.158552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:06.624 [2024-12-06 16:01:55.158562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:06.624 [2024-12-06 16:01:55.158571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:06.624 [2024-12-06 16:01:55.158580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:06.624 [2024-12-06 16:01:55.158589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:06.624 [2024-12-06 16:01:55.158646] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:06.624 [2024-12-06 
16:01:55.158660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:06.624 [2024-12-06 16:01:55.158681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:06.624 [2024-12-06 16:01:55.158691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:06.624 [2024-12-06 16:01:55.158700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:06.624 [2024-12-06 16:01:55.158710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.158720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:06.624 [2024-12-06 16:01:55.158740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:35:06.624 [2024-12-06 16:01:55.158750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.169764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.169814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:06.624 [2024-12-06 16:01:55.169830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.951 ms 00:35:06.624 [2024-12-06 16:01:55.169840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.169957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.169975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:06.624 [2024-12-06 16:01:55.169987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:35:06.624 [2024-12-06 16:01:55.169996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.193881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.193955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:06.624 [2024-12-06 16:01:55.193990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 00:35:06.624 [2024-12-06 16:01:55.194009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.194090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.194113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:06.624 [2024-12-06 16:01:55.194131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:06.624 [2024-12-06 16:01:55.194145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.194330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.194380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:06.624 [2024-12-06 16:01:55.194399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:35:06.624 [2024-12-06 16:01:55.194415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.194618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.194652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:06.624 [2024-12-06 16:01:55.194670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:35:06.624 [2024-12-06 16:01:55.194686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.204277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.204318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:06.624 [2024-12-06 16:01:55.204335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.542 ms 00:35:06.624 [2024-12-06 16:01:55.204346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.204492] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:06.624 [2024-12-06 16:01:55.204515] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:06.624 [2024-12-06 16:01:55.204528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.204539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:06.624 [2024-12-06 16:01:55.204551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:35:06.624 [2024-12-06 16:01:55.204565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.215376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.215425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:06.624 [2024-12-06 16:01:55.215440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.787 ms 00:35:06.624 [2024-12-06 16:01:55.215450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.215572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.215589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:06.624 [2024-12-06 16:01:55.215601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:35:06.624 [2024-12-06 16:01:55.215616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.215673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.215695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:06.624 [2024-12-06 16:01:55.215717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:06.624 [2024-12-06 16:01:55.215727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.216054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.216080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:06.624 [2024-12-06 16:01:55.216093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:35:06.624 [2024-12-06 16:01:55.216103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.216130] 
mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:06.624 [2024-12-06 16:01:55.216160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.216172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:06.624 [2024-12-06 16:01:55.216196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:06.624 [2024-12-06 16:01:55.216206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.224161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:06.624 [2024-12-06 16:01:55.224322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.224339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:06.624 [2024-12-06 16:01:55.224362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.091 ms 00:35:06.624 [2024-12-06 16:01:55.224372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.226548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.226582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:06.624 [2024-12-06 16:01:55.226595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:35:06.624 [2024-12-06 16:01:55.226605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.226668] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:06.624 [2024-12-06 16:01:55.227248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.227276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:06.624 [2024-12-06 16:01:55.227289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:35:06.624 [2024-12-06 16:01:55.227304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.227355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.227372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:06.624 [2024-12-06 16:01:55.227383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:06.624 [2024-12-06 16:01:55.227397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.227445] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:06.624 [2024-12-06 16:01:55.227462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.227472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:06.624 [2024-12-06 16:01:55.227483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:35:06.624 [2024-12-06 16:01:55.227493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.232094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.232140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:06.624 [2024-12-06 16:01:55.232156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.573 ms 00:35:06.624 [2024-12-06 16:01:55.232166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.232236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.624 [2024-12-06 16:01:55.232254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:06.624 [2024-12-06 16:01:55.232272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:06.624 [2024-12-06 16:01:55.232283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.624 [2024-12-06 16:01:55.233403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.735 ms, result 0 00:35:07.995  [2024-12-06T16:01:57.624Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-06T16:01:58.560Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-06T16:01:59.496Z] Copying: 75/1024 [MB] (25 MBps) [2024-12-06T16:02:00.433Z] Copying: 101/1024 [MB] (25 MBps) [2024-12-06T16:02:01.808Z] Copying: 125/1024 [MB] (24 MBps) [2024-12-06T16:02:02.744Z] Copying: 150/1024 [MB] (25 MBps) [2024-12-06T16:02:03.680Z] Copying: 176/1024 [MB] (25 MBps) [2024-12-06T16:02:04.616Z] Copying: 201/1024 [MB] (25 MBps) [2024-12-06T16:02:05.549Z] Copying: 226/1024 [MB] (24 MBps) [2024-12-06T16:02:06.533Z] Copying: 250/1024 [MB] (24 MBps) [2024-12-06T16:02:07.496Z] Copying: 275/1024 [MB] (24 MBps) [2024-12-06T16:02:08.428Z] Copying: 300/1024 [MB] (24 MBps) [2024-12-06T16:02:09.800Z] Copying: 325/1024 [MB] (25 MBps) [2024-12-06T16:02:10.736Z] Copying: 351/1024 [MB] (26 MBps) [2024-12-06T16:02:11.672Z] Copying: 377/1024 [MB] (25 MBps) [2024-12-06T16:02:12.607Z] Copying: 403/1024 [MB] (26 MBps) [2024-12-06T16:02:13.542Z] Copying: 430/1024 [MB] (26 MBps) [2024-12-06T16:02:14.477Z] Copying: 457/1024 [MB] (26 MBps) [2024-12-06T16:02:15.852Z] Copying: 484/1024 [MB] (27 MBps) [2024-12-06T16:02:16.783Z] Copying: 510/1024 [MB] (26 MBps) [2024-12-06T16:02:17.717Z] Copying: 537/1024 [MB] (26 MBps) [2024-12-06T16:02:18.653Z] Copying: 564/1024 [MB] (26 MBps) [2024-12-06T16:02:19.588Z] Copying: 590/1024 [MB] (26 MBps) [2024-12-06T16:02:20.523Z] Copying: 616/1024 [MB] (26 MBps) [2024-12-06T16:02:21.460Z] Copying: 641/1024 [MB] (24 MBps) [2024-12-06T16:02:22.838Z] Copying: 663/1024 [MB] (22 MBps) [2024-12-06T16:02:23.776Z] Copying: 685/1024 [MB] (22 MBps) [2024-12-06T16:02:24.714Z] Copying: 707/1024 [MB] (21 MBps) [2024-12-06T16:02:25.650Z] Copying: 729/1024 [MB] (22 MBps) [2024-12-06T16:02:26.585Z] Copying: 751/1024 [MB] (22 MBps) [2024-12-06T16:02:27.521Z] Copying: 773/1024 [MB] (22 MBps) [2024-12-06T16:02:28.454Z] Copying: 796/1024 [MB] (22 MBps) [2024-12-06T16:02:29.827Z] Copying: 818/1024 [MB] (22 MBps) [2024-12-06T16:02:30.762Z] Copying: 841/1024 [MB] (22 MBps) [2024-12-06T16:02:31.695Z] Copying: 863/1024 [MB] (22 MBps) [2024-12-06T16:02:32.629Z] Copying: 885/1024 [MB] (22 MBps) [2024-12-06T16:02:33.563Z] Copying: 908/1024 [MB] (22 MBps) [2024-12-06T16:02:34.499Z] Copying: 930/1024 [MB] (21 MBps) [2024-12-06T16:02:35.437Z] Copying: 952/1024 [MB] (22 MBps) [2024-12-06T16:02:36.811Z] Copying: 974/1024 [MB] (22 MBps) [2024-12-06T16:02:37.753Z] Copying: 996/1024 [MB] (22 MBps) [2024-12-06T16:02:37.753Z] Copying: 1019/1024 [MB] (22 MBps) [2024-12-06T16:02:38.020Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-06 16:02:38.011417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.327 [2024-12-06 16:02:38.011560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:35:49.327 [2024-12-06 16:02:38.011592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:49.327 [2024-12-06 16:02:38.011609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.327 [2024-12-06 16:02:38.011668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:49.327 [2024-12-06 16:02:38.013029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.327 [2024-12-06 16:02:38.013070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:49.327 [2024-12-06 16:02:38.013090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:35:49.327 [2024-12-06 16:02:38.013114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.327 [2024-12-06 16:02:38.013449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.327 [2024-12-06 16:02:38.013484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:49.327 [2024-12-06 16:02:38.013517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:35:49.327 [2024-12-06 16:02:38.013533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.327 [2024-12-06 16:02:38.013592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.327 [2024-12-06 16:02:38.013613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:49.327 [2024-12-06 16:02:38.013631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:49.327 [2024-12-06 16:02:38.013645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.327 [2024-12-06 16:02:38.013730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.327 [2024-12-06 16:02:38.013755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:49.327 [2024-12-06 16:02:38.013772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:49.327 [2024-12-06 16:02:38.013792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.327 [2024-12-06 16:02:38.013819] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:49.327 [2024-12-06 16:02:38.013843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:49.327 [2024-12-06 16:02:38.013868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.013987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014003] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:49.327 [2024-12-06 16:02:38.014264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 
16:02:38.014393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.014627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:49.328 [2024-12-06 16:02:38.015418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
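[Editor's note] The band dump above (continuing below through Band 100) enumerates every FTL band; in this run only Band 1 carries data (131072 of 261120 blocks valid, wr_cnt 1) and all the others are free. A minimal sketch for summarizing such a dump offline, assuming the console output has been saved to build.log (hypothetical file name; the field layout follows the ftl_dev_dump_bands lines as printed here):

  #!/usr/bin/env bash
  # Summarize the FTL band-validity dump from a saved console log.
  log=build.log    # hypothetical capture of the output above
  pat='Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*'

  # Count bands per state (expect: open 1, free 99 for this run).
  grep -o "$pat" "$log" |
    awk '{ state[$NF]++ } END { for (s in state) printf "%-6s %d\n", s, state[s] }'

  # List any band still holding valid blocks ($3 is the valid-block count).
  grep -o "$pat" "$log" | awk '$3 > 0'
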
00:35:49.603 [2024-12-06 16:02:38.015480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.015990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:49.603 [2024-12-06 16:02:38.016286] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:49.603 [2024-12-06 16:02:38.016302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8573de67-cc7d-4b52-b777-0a845efb4629 00:35:49.603 [2024-12-06 16:02:38.016318] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:49.603 [2024-12-06 16:02:38.016333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1824 00:35:49.603 [2024-12-06 16:02:38.016347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1792 00:35:49.603 [2024-12-06 16:02:38.016363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0179 00:35:49.603 [2024-12-06 16:02:38.016384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:49.603 [2024-12-06 16:02:38.016399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:49.603 [2024-12-06 16:02:38.016414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:49.603 [2024-12-06 16:02:38.016427] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:49.603 [2024-12-06 16:02:38.016440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:49.603 [2024-12-06 16:02:38.016456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.603 [2024-12-06 16:02:38.016471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:49.603 [2024-12-06 16:02:38.016487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:35:49.603 [2024-12-06 16:02:38.016501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.603 [2024-12-06 16:02:38.019786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.603 [2024-12-06 16:02:38.019829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:49.603 [2024-12-06 16:02:38.019867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:35:49.603 [2024-12-06 16:02:38.019883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.603 [2024-12-06 16:02:38.020120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:49.603 [2024-12-06 16:02:38.020146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:49.603 [2024-12-06 16:02:38.020164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:35:49.604 [2024-12-06 16:02:38.020178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.030633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.030674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:49.604 [2024-12-06 16:02:38.030704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.030715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.030775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.030806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:49.604 [2024-12-06 16:02:38.030834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.030845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.030921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.030966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:49.604 [2024-12-06 16:02:38.030981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.030992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.031016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.031030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:49.604 [2024-12-06 16:02:38.031042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.031053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.047832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.047899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize NV cache 00:35:49.604 [2024-12-06 16:02:38.047932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.047965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:49.604 [2024-12-06 16:02:38.062292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:49.604 [2024-12-06 16:02:38.062426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:49.604 [2024-12-06 16:02:38.062540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:49.604 [2024-12-06 16:02:38.062654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:49.604 [2024-12-06 16:02:38.062754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:49.604 [2024-12-06 16:02:38.062851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.062866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.062929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:49.604 [2024-12-06 16:02:38.062970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:49.604 [2024-12-06 16:02:38.062991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:49.604 [2024-12-06 16:02:38.063001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:49.604 [2024-12-06 16:02:38.063173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast 
shutdown', duration = 51.712 ms, result 0 00:35:49.875 00:35:49.875 00:35:49.875 16:02:38 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:51.778 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:51.778 Process with pid 95117 is not found 00:35:51.778 Remove shared memory files 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95117 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 95117 ']' 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 95117 00:35:51.778 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95117) - No such process 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 95117 is not found' 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_band_md /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_l2p_l1 /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_l2p_l2 /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_l2p_l2_ctx /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_nvc_md /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_p2l_pool /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_sb /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_sb_shm /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_trim_bitmap /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_trim_log /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_trim_md /dev/hugepages/ftl_8573de67-cc7d-4b52-b777-0a845efb4629_vmap 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:51.778 00:35:51.778 real 3m18.338s 00:35:51.778 user 3m5.301s 00:35:51.778 sys 0m14.801s 00:35:51.778 16:02:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.779 16:02:40 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:51.779 ************************************ 00:35:51.779 END TEST ftl_restore_fast 00:35:51.779 ************************************ 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@14 -- # killprocess 87551 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 87551 ']' 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@958 -- # kill -0 87551 00:35:51.779 
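[Editor's note] Two details worth pulling out of the restore-test tail above: the "Process with pid 95117 is not found" path comes from probing the pid with kill -0 before signalling, and the dumped statistics are internally consistent (WAF = total writes / user writes = 1824 / 1792 ≈ 1.0179, exactly what ftl_debug.c printed). A standalone sketch of the probe pattern; the in-tree killprocess also performs the uname/ps process-name checks visible further down in the log, which are skipped here:

  #!/usr/bin/env bash
  # Minimal sketch of the killprocess pattern seen above.
  killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then     # pid still alive?
      kill "$pid"                           # yes: terminate it
      wait "$pid" 2>/dev/null || true       # reap it if it was our child
    else
      echo "Process with pid $pid is not found"
    fi
  }
  killprocess 95117   # pid from this run; already gone, so it only echoes

  # Cross-check the reported write-amplification factor:
  awk 'BEGIN { printf "WAF: %.4f\n", 1824 / 1792 }'   # -> WAF: 1.0179
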
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87551) - No such process 00:35:51.779 Process with pid 87551 is not found 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87551 is not found' 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97118 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97118 00:35:51.779 16:02:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 97118 ']' 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.779 16:02:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:52.037 [2024-12-06 16:02:40.544830] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:52.037 [2024-12-06 16:02:40.545078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97118 ] 00:35:52.037 [2024-12-06 16:02:40.711486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.295 [2024-12-06 16:02:40.760145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.881 16:02:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.881 16:02:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:52.881 16:02:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:53.139 nvme0n1 00:35:53.139 16:02:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:53.139 16:02:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:53.139 16:02:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:53.398 16:02:41 ftl -- ftl/common.sh@28 -- # stores=6f9b6aea-ef61-4f10-b928-65a648ce3294 00:35:53.398 16:02:41 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:53.398 16:02:41 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f9b6aea-ef61-4f10-b928-65a648ce3294 00:35:53.657 16:02:42 ftl -- ftl/ftl.sh@23 -- # killprocess 97118 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 97118 ']' 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@958 -- # kill -0 97118 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@959 -- # uname 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97118 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.657 killing process with pid 97118 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 97118' 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@973 -- # kill 97118 00:35:53.657 16:02:42 ftl -- common/autotest_common.sh@978 -- # wait 97118 00:35:54.224 16:02:42 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:54.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:54.483 Waiting for block devices as requested 00:35:54.483 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.483 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.483 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.742 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:00.045 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:00.045 16:02:48 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:00.045 Remove shared memory files 00:36:00.045 16:02:48 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:00.045 16:02:48 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:00.045 16:02:48 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:00.045 16:02:48 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:00.045 16:02:48 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:00.045 16:02:48 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:00.045 00:36:00.045 real 14m28.638s 00:36:00.045 user 16m53.485s 00:36:00.045 sys 1m45.767s 00:36:00.045 16:02:48 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.045 16:02:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:00.045 ************************************ 00:36:00.045 END TEST ftl 00:36:00.045 ************************************ 00:36:00.045 16:02:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:00.045 16:02:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:00.045 16:02:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:00.045 16:02:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:00.045 16:02:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:00.045 16:02:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:00.045 16:02:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:00.045 16:02:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:00.045 16:02:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:00.045 16:02:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:00.045 16:02:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.045 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:36:00.045 16:02:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:00.045 16:02:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:00.045 16:02:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:00.045 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:36:01.946 INFO: APP EXITING 00:36:01.946 INFO: killing all VMs 00:36:01.946 INFO: killing vhost app 00:36:01.946 INFO: EXIT DONE 00:36:01.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:02.513 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:02.513 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:02.513 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:02.513 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:02.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:03.338 Cleaning 00:36:03.338 Removing: /var/run/dpdk/spdk0/config 00:36:03.338 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:03.338 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:03.338 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:03.338 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:03.338 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:03.338 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:03.338 Removing: /var/run/dpdk/spdk0 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70038 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70213 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70424 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70513 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70541 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70660 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70678 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70862 00:36:03.338 Removing: /var/run/dpdk/spdk_pid70947 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71038 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71138 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71224 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71258 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71302 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71367 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71473 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71930 00:36:03.338 Removing: /var/run/dpdk/spdk_pid71984 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72036 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72052 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72121 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72137 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72212 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72228 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72281 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72299 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72352 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72370 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72513 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72550 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72628 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72800 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72871 00:36:03.338 Removing: /var/run/dpdk/spdk_pid72902 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73367 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73460 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73558 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73600 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73628 00:36:03.338 Removing: /var/run/dpdk/spdk_pid73706 00:36:03.338 Removing: /var/run/dpdk/spdk_pid74327 00:36:03.338 Removing: /var/run/dpdk/spdk_pid74358 00:36:03.338 Removing: /var/run/dpdk/spdk_pid74860 00:36:03.338 Removing: /var/run/dpdk/spdk_pid74953 00:36:03.338 Removing: /var/run/dpdk/spdk_pid75056 00:36:03.338 Removing: /var/run/dpdk/spdk_pid75099 00:36:03.338 Removing: /var/run/dpdk/spdk_pid75124 00:36:03.338 Removing: /var/run/dpdk/spdk_pid75150 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77003 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77129 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77133 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77151 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77191 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77195 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77212 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77257 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77261 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77273 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77323 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77327 00:36:03.338 Removing: /var/run/dpdk/spdk_pid77339 00:36:03.338 Removing: 
/var/run/dpdk/spdk_pid78748 00:36:03.338 Removing: /var/run/dpdk/spdk_pid78845 00:36:03.338 Removing: /var/run/dpdk/spdk_pid80254 00:36:03.338 Removing: /var/run/dpdk/spdk_pid81964 00:36:03.338 Removing: /var/run/dpdk/spdk_pid82027 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82097 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82196 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82282 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82367 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82430 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82494 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82593 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82681 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82771 00:36:03.596 Removing: /var/run/dpdk/spdk_pid82829 00:36:03.597 Removing: /var/run/dpdk/spdk_pid82899 00:36:03.597 Removing: /var/run/dpdk/spdk_pid82996 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83088 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83173 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83236 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83300 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83399 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83484 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83570 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83620 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83690 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83753 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83822 00:36:03.597 Removing: /var/run/dpdk/spdk_pid83920 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84000 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84089 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84151 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84210 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84279 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84342 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84440 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84527 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84665 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84938 00:36:03.597 Removing: /var/run/dpdk/spdk_pid84969 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85427 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85605 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85694 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85794 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85831 00:36:03.597 Removing: /var/run/dpdk/spdk_pid85856 00:36:03.597 Removing: /var/run/dpdk/spdk_pid86158 00:36:03.597 Removing: /var/run/dpdk/spdk_pid86196 00:36:03.597 Removing: /var/run/dpdk/spdk_pid86258 00:36:03.597 Removing: /var/run/dpdk/spdk_pid86634 00:36:03.597 Removing: /var/run/dpdk/spdk_pid86772 00:36:03.597 Removing: /var/run/dpdk/spdk_pid87551 00:36:03.597 Removing: /var/run/dpdk/spdk_pid87672 00:36:03.597 Removing: /var/run/dpdk/spdk_pid87842 00:36:03.597 Removing: /var/run/dpdk/spdk_pid87934 00:36:03.597 Removing: /var/run/dpdk/spdk_pid88326 00:36:03.597 Removing: /var/run/dpdk/spdk_pid88574 00:36:03.597 Removing: /var/run/dpdk/spdk_pid88909 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89097 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89230 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89271 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89408 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89429 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89465 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89663 00:36:03.597 Removing: /var/run/dpdk/spdk_pid89862 00:36:03.597 Removing: /var/run/dpdk/spdk_pid90314 00:36:03.597 Removing: /var/run/dpdk/spdk_pid90789 00:36:03.597 Removing: /var/run/dpdk/spdk_pid91263 00:36:03.597 Removing: /var/run/dpdk/spdk_pid91821 
00:36:03.597 Removing: /var/run/dpdk/spdk_pid91969 00:36:03.597 Removing: /var/run/dpdk/spdk_pid92046 00:36:03.597 Removing: /var/run/dpdk/spdk_pid92733 00:36:03.597 Removing: /var/run/dpdk/spdk_pid92800 00:36:03.597 Removing: /var/run/dpdk/spdk_pid93306 00:36:03.597 Removing: /var/run/dpdk/spdk_pid93721 00:36:03.597 Removing: /var/run/dpdk/spdk_pid94196 00:36:03.597 Removing: /var/run/dpdk/spdk_pid94324 00:36:03.597 Removing: /var/run/dpdk/spdk_pid94355 00:36:03.597 Removing: /var/run/dpdk/spdk_pid94404 00:36:03.597 Removing: /var/run/dpdk/spdk_pid94463 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94511 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94682 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94751 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94807 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94863 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94898 00:36:03.855 Removing: /var/run/dpdk/spdk_pid94975 00:36:03.855 Removing: /var/run/dpdk/spdk_pid95117 00:36:03.855 Removing: /var/run/dpdk/spdk_pid95321 00:36:03.855 Removing: /var/run/dpdk/spdk_pid95752 00:36:03.855 Removing: /var/run/dpdk/spdk_pid96186 00:36:03.855 Removing: /var/run/dpdk/spdk_pid96650 00:36:03.855 Removing: /var/run/dpdk/spdk_pid97118 00:36:03.855 Clean 00:36:03.855 16:02:52 -- common/autotest_common.sh@1453 -- # return 0 00:36:03.855 16:02:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:03.855 16:02:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.855 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:36:03.855 16:02:52 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:03.855 16:02:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.855 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:36:03.855 16:02:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:03.855 16:02:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:03.855 16:02:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:03.855 16:02:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:03.855 16:02:52 -- spdk/autotest.sh@398 -- # hostname 00:36:03.855 16:02:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:04.113 geninfo: WARNING: invalid characters removed from testname! 
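[Editor's note] The Clean stage above walks /var/run/dpdk and removes the spdk0 runtime directory (config, fbarray_* memory maps, hugepage_info) plus one spdk_pid* entry per target instance launched during the run. A rough standalone equivalent, assuming no SPDK process is still using these paths; the pid liveness check is an added safeguard here, not something the log itself performs:

  #!/usr/bin/env bash
  # Drop stale DPDK runtime state left behind by finished spdk_tgt runs.
  rm -rf /var/run/dpdk/spdk0                 # primary runtime dir
  for d in /var/run/dpdk/spdk_pid*; do
    [[ -e $d ]] || continue                  # glob may match nothing
    pid=${d##*spdk_pid}
    kill -0 "$pid" 2>/dev/null && continue   # skip anything still alive
    rm -rf "$d"
  done
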
00:36:30.662 16:03:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:30.662 16:03:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:32.562 16:03:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:35.092 16:03:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:37.627 16:03:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:39.532 16:03:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:42.823 16:03:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:42.823 16:03:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:42.823 16:03:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:42.823 16:03:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:42.823 16:03:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:42.823 16:03:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:42.823 + [[ -n 6038 ]] 00:36:42.823 + sudo kill 6038 00:36:42.833 [Pipeline] } 00:36:42.850 [Pipeline] // timeout 00:36:42.856 [Pipeline] } 00:36:42.870 [Pipeline] // stage 00:36:42.876 [Pipeline] } 00:36:42.890 [Pipeline] // catchError 00:36:42.901 [Pipeline] stage 00:36:42.904 [Pipeline] { (Stop VM) 00:36:42.917 [Pipeline] sh 00:36:43.199 + vagrant halt 00:36:46.569 ==> default: Halting domain... 
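[Editor's note] The coverage epilogue that just ran merges the base and test captures, then filters the combined tracefile down to SPDK sources. The same chain, sketched without the branch/function --rc flags the log passes on every call, with the repeated spdk/../output path captured in a variable:

  #!/usr/bin/env bash
  out=/home/vagrant/spdk_repo/output   # resolves spdk/../output from the log
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR   # matches the rm -f above
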
00:36:53.149 [Pipeline] sh 00:36:53.435 + vagrant destroy -f 00:36:55.968 ==> default: Removing domain... 00:36:56.238 [Pipeline] sh 00:36:56.530 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:56.557 [Pipeline] } 00:36:56.570 [Pipeline] // stage 00:36:56.574 [Pipeline] } 00:36:56.584 [Pipeline] // dir 00:36:56.587 [Pipeline] } 00:36:56.597 [Pipeline] // wrap 00:36:56.601 [Pipeline] } 00:36:56.609 [Pipeline] // catchError 00:36:56.617 [Pipeline] stage 00:36:56.620 [Pipeline] { (Epilogue) 00:36:56.631 [Pipeline] sh 00:36:56.910 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:02.194 [Pipeline] catchError 00:37:02.196 [Pipeline] { 00:37:02.214 [Pipeline] sh 00:37:02.502 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:02.761 Artifacts sizes are good 00:37:02.771 [Pipeline] } 00:37:02.788 [Pipeline] // catchError 00:37:02.802 [Pipeline] archiveArtifacts 00:37:02.810 Archiving artifacts 00:37:02.913 [Pipeline] cleanWs 00:37:02.925 [WS-CLEANUP] Deleting project workspace... 00:37:02.925 [WS-CLEANUP] Deferred wipeout is used... 00:37:02.931 [WS-CLEANUP] done 00:37:02.932 [Pipeline] } 00:37:02.947 [Pipeline] // stage 00:37:02.953 [Pipeline] } 00:37:02.967 [Pipeline] // node 00:37:02.973 [Pipeline] End of Pipeline 00:37:03.005 Finished: SUCCESS